Disclaimer: I am not an investment advisor. When I describe my own trading activities, it is not intended as advice or solicitation of any kind.

30 October 2010

Collaboration is Good

NeighborTrader has been making some comments lately about a trade he has been backtesting.  At first, he was trying to work out a way to make it an intraday trade so that he could run it at the office as part of his job.  A fairly new trader like him tends to prefer that route: if the trade goes well, he can throw far more of the firm's resources at it right away, instead of slowly saving up his own money to cover the margin.  Unfortunately for him, after playing with a lot of different variables he came to the conclusion that the trade worked best on daily charts, which means long-term holding times.  Since our firm has a day-trading culture and isn't really set up from a risk-management standpoint to hold trades for more than a few hours, that pretty much precludes him running it as part of his job.

Knowing that I've been running long-term trades in paperMoney, he chatted with me yesterday about his trade and the methodology he was using to backtest it.  I have to admit, I'm pretty impressed at how rigorous he's being with it considering: (a) he has no academic or professional experience with formal backtesting; and (b) it's something he's doing for himself on the weekends and committing very little capital to.  He even went so far as to buy historical data, something most of the guys at the office don't do for their big trades.  He also bought a book to learn proper backtesting methods to minimize the chance of sample bias and curve-fitting.

Since it's his trade, I don't think it's right for me to go into it in detail on a public blog.  He gave me all the information I need to run it myself, and suggested some products to run it in, and I plan to do so, although I can't think of a good name for it right now.  But I'll leave the parameters a little hazy to protect his intellectual property.  Suffice to say that it is pretty similar to CS|MACO in that it looks to enter positions contrary to market consensus, but only to do so when it isn't fighting a strong trend.  It seeks to buy dips and sell spikes, and it's purely technical, using indicators widely available on most charting packages.  It also trades very infrequently, so I might have to run it on more than one product just to avoid being bored.

He's been running it in S&P-500 Futures (it needs a lot of leverage to succeed, and he understands futures very well since that's his job) and a couple of other products.  He just exited a trade in it today for a nice fat profit.  Since I already have CS|MACO running on SPY (the S&P-500 ETF), and I have other trades running on other equity indexes (Iron Condors on Russell, Collars on Nasdaq-100), I think I'll run it against US Treasury 10-year Note Futures.  These trade at the CME since its merger with the CBOT, and they're available in paperMoney.

Speaking of CS|MACO, it's been quiet for a while now.  Individual investors have stayed bullish (they've been right for once), and SPY has stayed above its 25-day moving average.  Long+short = flat, so I've been watching this whole move from the sidelines.  The last couple of weeks haven't been good for any trade except iron condors, with the stock market going pretty much sideways.  Something has to give with CS|MACO soon, though, because the 25-day moving average and the closing price are converging.

29 October 2010

Arms Race, Part 4

In Part 3, I went all the way down to the pixel level to describe how my automatic Bejeweled player detects the color and type of the gems on the screen.  Today, we'll use that information to get ridiculously high scores in Bejeweled Blitz.

Detecting all the colors was pretty tough, and detecting the type of gem in each cell was even tougher; but now we need the software to make a decision and act on it.  Specifically, of all the possible trigger/target combinations available on the board at the moment, which one is the best one to do next?  I chose to go a pretty simplistic route on this, since I was, after all, writing this for the heck of it.

I do not have the software attempt to predict combos or other more advanced plays to maximize points.  This is something that humans playing the game do to some extent without even thinking about it, but it becomes more challenging for software.  Consider the picture below.  I have highlighted the 4 possible moves by drawing a red blobby line between the trigger and target gems.  Which move will generate the most points?  If you said "the bottom-right one", you would be correct, because both the green and red gems would be removed, not to mention the bonus you get from doing a combo move like that.  By the way, the software would call that move "6,5-7,5".  The best move that the software sees from this board is "4,6-3,6" - swapping the white and yellow gems in the bottom-middle of the board.  I'll explain why this is chosen shortly.

Which move is best?
So skipping the combos and any other move in which you choose things based on secondary effects, I can define some scoring rules in decreasing order of weight:
  1. Always use paths with multipliers in them first - this is a no-brainer, because that has long-term positive effects on the entire game.
  2. If a path has a crosshair in it, it will destroy more gems than any other move on the board - except maybe a hypercube, but the software doesn't detect those, so that's moot.
  3. If a path has a flaming gem in it, its explosion will take out a 3x3 square of gems as well as the path gems themselves, yielding 9-10 gems destroyed on a 3-gem path.
  4. Longer paths are better than shorter ones, because they destroy more gems.  Plus, they generate crosshairs, hypercubes, and flaming gems, so that's goodness too.
  5. Paths lower on the board have more opportunities to create combos than paths near the top.  All else being equal, pick the lowest path on the board.
Going back to our picture and following these rules, we can see that there are no multipliers, no crosshairs or flames, and no 4-gem or greater paths.  That gets us all the way to rule #5, where we pick "4,6-3,6" because its highest gem (all of them) is lower than the highest gem of any other path on the board.
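The five rules above can be sketched as a simple weighting function.  This is a hypothetical Python illustration, not the author's actual code; the gem representation, the function names, and the specific weights are my own assumptions, chosen only so that each rule always outranks the ones below it.

```python
# Hedged sketch of the five scoring rules.  Each gem is a dict with a
# 'type' key ('normal', 'multiplier', 'crosshair', 'flame') and a 'y'
# key (row index, 0 = top of the board, growing downward).
def score_path(path):
    """Score a candidate path; higher is better."""
    types = [g["type"] for g in path]
    score = 0
    if "multiplier" in types:   # rule 1: multipliers always come first
        score += 10000
    if "crosshair" in types:    # rule 2: clears an entire row and column
        score += 1000
    if "flame" in types:        # rule 3: 3x3 explosion on detonation
        score += 100
    score += len(path) * 10     # rule 4: longer paths destroy more gems
    # rule 5: all else being equal, pick the lowest path on the board;
    # y grows downward, so a larger minimum y means a lower path
    score += min(g["y"] for g in path)
    return score

def best_path(paths):
    """Pick the highest-scoring path from the candidates."""
    return max(paths, key=score_path)
```

With weights separated by an order of magnitude, a single flame (rule 3) still beats even a 5-gem plain path (rule 4), matching the stated priority order.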

OK, great, we have rules of how to pick a path, but we've gotten a little ahead of ourselves.  How do we find the damn things?  In the software, I have a Grid object with 8x8 GridItem objects (aka Gems) in it.  During the detection phase, I set the color and type of each of those gem objects.  When I'm done, I have a software representation of the board as of the last screen capture.  Then I look at each gem and check to see if it is the leading edge gem of one of the following patterns:

The four path patterns in their basic form
In the first pattern, the gray squares are optional.  Readers familiar with Bejeweled will notice that adding one or both optional squares will cause the path to generate a flaming gem or a hypercube, respectively.  The first case also has a special sub-case where the mirror-image is also possible, with the trigger below the target rather than above, but only the first of the two optional gems is present.  In this special sub-case, we move the trigger to the optional gem position to form a "T" when the path is completed; this generates a crosshair, so it is obviously more desirable.  If both optional gems are present, however, we leave the trigger where it is because hypercubes are awesome - even if the software doesn't know how to use them right now.

Anyway, in examining each gem to see if it is the leading edge of one of the patterns above, I actually examine it 4 times for each of the 4 patterns above - once with each possible 90-degree rotation of the pattern.  Mirror-images are included as well, but they are done within each rotation, so it's only 16 (4x4) checks on each gem instead of 32 (4x4x2).
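Generating the rotations and mirror-images of a pattern is mostly coordinate arithmetic.  Here is a hedged sketch of one way to do it, assuming a pattern is a set of (dx, dy) cell offsets; none of these function names come from the actual program.

```python
# Each pattern is a set of (dx, dy) offsets from the leading-edge gem.
def rot90(pattern):
    """Rotate a pattern 90 degrees clockwise: (x, y) -> (-y, x)."""
    return frozenset((-dy, dx) for dx, dy in pattern)

def mirror(pattern):
    """Mirror a pattern across the vertical axis: (x, y) -> (-x, y)."""
    return frozenset((-dx, dy) for dx, dy in pattern)

def normalize(pattern):
    """Translate so the bounding box's top-left corner is (0, 0),
    letting equivalent shapes compare equal."""
    min_x = min(x for x, _ in pattern)
    min_y = min(y for _, y in pattern)
    return frozenset((x - min_x, y - min_y) for x, y in pattern)

def orientations(pattern):
    """All distinct rotations and mirror-rotations of a pattern."""
    results = set()
    p = frozenset(pattern)
    for _ in range(4):
        results.add(normalize(p))
        results.add(normalize(mirror(p)))
        p = rot90(p)
    return results
```

Note how symmetric shapes collapse: a straight 3-gem line yields only 2 distinct orientations (horizontal and vertical), which is why doing mirrors within each rotation keeps the check count at 16 per gem rather than 32.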

Once all the processing above is done, we have a full list of every path available for activating.  For each path, we generate a score based on the 5 rules discussed above, and then sort them by score in descending order.  Then we pick the best possible path and execute it by moving the mouse to the trigger, clicking, moving the mouse to the target, and clicking again.  I do the mouse movement and clicking using the XLib API again: XSetInputFocus, XSync, XWarpPointer, XQueryPointer, and XSendEvent in various combinations that I worked out primarily through internet searching and a lot of trial and error.

As I was coding all of this, I went with the most expedient code possible, rather than the highest-performing.  Again, it was a project being done for the heck of it, so spending a lot of time designing around performance was too much like work.  Nevertheless, as I reached code completion, I started being concerned that the image processing and path detection would take so long that the software wouldn't even outscore me.  After all, I can hit 350k pretty reliably every week, and my highest human score was 652k.  Besides speed, I also have the human judgment that lets me pick the combo path automatically as we saw at the start of the post.

I needn't have worried: the screenshot, the origin detection, the color and state detection, the Grid-building, the path detection, and the scoring and decision process all take 9 milliseconds to finish.  In fact, this thing is so fast that, as gems fall, it finds matches that don't really exist once everything settles.  This causes it to have a high mistake rate, so I actually have it sleep (do nothing) for 50 milliseconds between passes to give Bejeweled a little time to catch up.  Even so, it's still mighty fast.

Below are two videos, neither one using boosts.  The first was captured while I was manually playing, and while it's not exactly a stellar run, it's a fairly representative game, finishing with a score of 166k.  The second is a run using my software.  It gets confused a couple of times, pausing for a full second while it reacquires the origin; and notice how it just ignores the hypercubes and makes a lot of stupid plays.  And yet the score speaks for itself.

[Video: playing by hand, for 166k]

[Video: playing by program, for 698k]

Yeah, that'll do.

28 October 2010

Arms Race, Part 3

In Part 1, I explained my motives for writing software to play Bejeweled Blitz.  In Part 2, I defined the terms and general outline of a program to automatically play Bejeweled Blitz.  In Part 3, I'll start at the screen level and work all the way down to the pixel, showing how I detect the color and state of each gem on the grid.

Where to Begin
I run 64-bit Ubuntu at home, and my browser is Firefox.  To capture and analyze the contents of the screen, I used XLib API calls.  In X, every window is laid out in a hierarchy starting with the root window that holds the desktop, taskbars, and all the top-level application windows.  So first I open my display with XOpenDisplay and store it for the life of the capture job, since it gets used throughout the process.  Next, I wrote a function to search a given window for the word "Bejeweled" in its title text (using XGetWMName).  If found, it returns a handle to the window.  Otherwise it uses XQueryTree to get an array of all the immediate children and recursively calls itself with each of them.  Then it is just a simple matter of calling that function initially with the result of RootWindow(disp, DefaultScreen(disp)).

Now we drop down a level from screen to window: specifically, the Firefox browser in which Bejeweled is running.  To get an image of that window, it is as easy as calling XGetWindowAttributes to see how big it is, and then XGetImage to get an XImage pointer that we can analyze pixel by pixel.  To get a pixel, I use XGetPixel, passing the XImage that resulted from XGetImage, and the X/Y coordinates of the pixel I want.  This returns me the RGB value of the pixel as a long integer, which I can break into separate color levels with a little ANDing and shifting.
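The "ANDing and shifting" step looks something like this.  A hedged sketch in Python rather than the program's actual code: the masks and shift amounts assume a typical 24-bit TrueColor visual with red in the high byte, which can vary by X server.

```python
# Unpack a packed pixel value (as returned by XGetPixel) into its
# 8-bit channels, assuming 0xRRGGBB layout.
def unpack_rgb(pixel):
    red   = (pixel >> 16) & 0xFF
    green = (pixel >> 8)  & 0xFF
    blue  =  pixel        & 0xFF
    return red, green, blue
```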

The next problem is finding the grid origin.  There are a lot of better ways to do this, but I chose the simple brute-force method.  In the picture on the right, I've put a red box around the group of pixels that I search for to find the top-left corner of the grid.  To do this, I simply examine every pixel until I find one that matches the first pixel in the red box.  Then I check to see if every other pixel in my test section also matches.  If so, I offset to where the grid origin is and proceed.  Otherwise, I move to the next pixel and do it all over again.  Not especially efficient, but it gets the job done.  This spot on the board is key, because it is only there during game play, and it never changes color except for short time periods during hypercube and crosshair detonation.
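In pseudocode-ish Python, the brute-force search boils down to a tiny template match.  This is a sketch under stated assumptions: `get_pixel` stands in for XGetPixel, and the patch contents here are invented, not the actual reference pixels from the post.

```python
# Scan every pixel for the patch's first pixel, then verify the rest
# of the patch; return the match location or None.
def find_template(get_pixel, width, height, patch):
    """patch: dict mapping (dx, dy) offsets to expected pixel values;
    must include (0, 0) as the quick first-pixel check."""
    first = patch[(0, 0)]
    for y in range(height):
        for x in range(width):
            if get_pixel(x, y) != first:
                continue                      # cheap reject
            if all(get_pixel(x + dx, y + dy) == v
                   for (dx, dy), v in patch.items()):
                return (x, y)                 # full patch matched
    return None
```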

Once I know the grid origin, I can divide the grid into an 8x8 array of cells, with each cell 40x40 pixels in size.  At this point I had to get a little more creative, because of the dynamic nature of the gems and the board.  The background changes color frequently in response to multiplier changes, power-ups, and game mode.  The gems also spin when they're clicked on, making pixel-by-pixel identification impossible.  The key here is to focus on what matters, and to eliminate that which doesn't.

Special Multiplier Processing
First I test each cell to see if it is a multiplier.  Through a great deal of manual playing and taking screenshots, I discovered that the x2, x3, and x5 multipliers all had the exact same X shape on the gems, but the x4 was a little different - I haven't analyzed anything higher than x5.  I decided the best way to handle this was to look for a set of pixels that were white, forming the X shape, but examine only those pixels that were common to both shapes.  What I ended up with was the set of pixels depicted to the right, where the gray pixels are white only in x4 or only in x2/3/5.  Then it was a simple matter of checking each pixel in the gem in the right positions to see if it was white.  Anything else that wasn't in the must-be-white list could be ignored.

If the previous step determined that the cell is a multiplier, then I do a special color test on it, different from other gems.  I check the color of a single pixel just above the "X".  Based on that color, I know the color of the multiplier cell and I stop processing it further for color.  In the picture on the right you can see the pixelated shape of the white X, as well as a red dot where I check the multiplier for color.

Sensing the Aura
The next bit of special processing is to determine whether the cell is flaming or is a crosshair.  These are almost as important to detect as multipliers, because using them increases the chance of getting a multiplier: especially crosshairs, which will generate a multiplier on every use, so long as the multiplier time limit has expired.  Crosshairs also require some special color processing later, so we need to know if the current cell is a crosshair before we start looking at color.

The way I detect crosshairs and flames is to look at the top-middle of the cell, actually bleeding over into the cell above it by one pixel and extending down two pixels into the current cell.  This area is never occupied by gems at rest, so it is a good place to search for auras.  Based on the average color in this region, I determine if we have a crosshair, a flame, or a normal gem in this cell.

Finally, I check the color.  To do this, I only look at the middle 12x12 square of pixels, because this area is all gem (no background) for all colors and shapes, and never is corrupted by flaming aura.  I take the simple average of each of the RGB values in the pixels to come up with the average color.  For non-crosshair gems, I can be pretty precise because there is seldom any variation.  Crosshairs pulsate, though, so I started by examining a bunch of frames of crosshairs and finding optimum RGB values.  Then the program works outward from these values until the calculated average value fits into the range of one of the colors.  In the picture on the right, we can see a flaming blue gem with a red border around its aura zone and its color zone.
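The averaging and the "work outward until it fits" matching might look roughly like this.  The reference RGB values below are invented placeholders for illustration, not the author's measured values, and a straight nearest-color-within-tolerance test stands in for his frame-by-frame calibration.

```python
# Hypothetical reference colors; the real values were measured from
# screenshots and are not given in the post.
REFERENCE = {
    "blue":   (40, 60, 200),
    "green":  (40, 200, 60),
    "red":    (210, 40, 40),
    "yellow": (220, 210, 50),
}

def average_color(pixels):
    """pixels: iterable of (r, g, b) tuples, e.g. the center 12x12 patch."""
    pixels = list(pixels)
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) // n for i in range(3))

def classify(avg, tolerance=60):
    """Return the reference color nearest to avg, or None if nothing
    falls within the tolerance (pulsating crosshairs need the slack)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    name, d = min(((n, dist(avg, c)) for n, c in REFERENCE.items()),
                  key=lambda t: t[1])
    return name if d <= tolerance else None
```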

Once I know the color and state, I can move on to the next cell, repeating the process until all 64 cells have been identified.  If falling gems, hypercubes, or other temporary embellishments cause a cell to be undetected or mis-detected, it usually has little to no effect on the outcome of the game.  There is enough going on at any given time that little pockets of misinformation can be absorbed.

The conclusion is in Part 4: Playing the Game

26 October 2010

Arms Race, Part 2

In Part 1, I explained my motives for writing software to play Bejeweled Blitz.  In Part 2, I define some terminology and lay some groundwork for the first step of actual programming.  I assume that readers of this blog have played Bejeweled Blitz before; if not, go to Facebook and waste some time.  When you're done "researching", come on back and read the rest of this post.  Since it's a visual game, I'll need to define some terminology so when I describe a gem type we'll all know what I'm talking about.

The board of Bejeweled Blitz is the entire Flash application, including the non-game screens and the artwork behind and around the actual playing area, which I call the grid.  The grid holds all the gems and other playing pieces in an 8x8 array of cells.  The top-left corner cell's top-left corner is the grid origin.  The X values of cells and pixels increase as we travel to the right, and the Y values of cells and pixels increase as we travel down.  The top-left cell is (0,0), and the bottom-right is (7,7).

Gems can be any of the following colors: Blue, Green, Orange, Purple, Red, White, or Yellow.  The majority of gems are in a normal state, but they can also be explosive (aka flaming), crosshairs, or multipliers.  There is also a non-gem piece called a hypercube (PopCap's term).  These are nearly impossible for my software to identify in a moving board, so I gave up on them and let them be.  Yellow gems can also be in coin form; they still work as yellow gems, but add 100 coins that can be used for boosts between games.  Coins collected during the game count the same as coins left on the board at the end, so in practice yellow gems and yellow coins are equivalent.

Normal
Flaming
Coin
Crosshair
Multiplier
Hypercube

Points are generated by removing the gems from the board, which is done by forming paths of matching gems by swapping two adjacent gems.  Paths can be 3, 4, or 5 gems long.  Depending on the shape and length of the path, flames, crosshairs, or hypercubes can be created from the path.  Since gems fall into the empty spaces left by paths after gem removal, it is possible to get a combo bonus when falling gems fill the holes of a new path.  Multipliers are generated when enough gems are removed in a single move; I think that number is 10-12, but I'm not sure.  There is also a no-multiplier time limit after one is generated, which my software doesn't account for.

Normal gems, coins, and multipliers remove only themselves when matched in a path.  Flames remove a 3x3 square with themselves in the center.  Crosshairs remove the entire row and column upon which they reside.  Hypercubes do not get used in a path: instead, they remove all gems of the same color as whatever gem they are swapped with.
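Those removal rules are easy to state in code.  A minimal sketch, assuming an 8x8 board of (x, y) cells; hypercubes are color-based rather than positional, so they are left out here just as the software leaves them alone.

```python
# Which cells does the gem at (x, y) clear when it goes off?
def cleared_cells(x, y, gem_type, size=8):
    if gem_type == "flame":       # 3x3 square centered on the gem,
        return {(i, j)            # clipped at the board edges
                for i in range(max(0, x - 1), min(size, x + 2))
                for j in range(max(0, y - 1), min(size, y + 2))}
    if gem_type == "crosshair":   # its entire row and column
        return ({(i, y) for i in range(size)} |
                {(x, j) for j in range(size)})
    # normal gems, coins, and multipliers remove only themselves
    return {(x, y)}
```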

The user must continually look for opportunities to create paths by swapping adjacent gems.  I refer to gems that form a path as path members.  Since only two gems can be swapped per move, we can define the trigger gem as the one which is out of place.  We can likewise define the target gem as the non-matching gem in the trigger's way.  Swapping trigger and target forms the path, removing the gems and activating any special properties in that path.

Speed is of the essence, as mentioned before.  If paths are formed fast enough, a speed bonus is built up; if this is maintained long enough, the game switches into Blazing Speed mode for a few seconds (10, maybe?).  In this mode, all normal trigger gems and coins are treated like explosive flames.  Note that not all members of the path explode - just the one involved directly in the swapping operation.  Thus combos do not get more explosive than normal, either.

A general outline of a working program to play Bejeweled, then, might look like this:
  1. Locate the origin of the grid on the screen
  2. Detect all the gem colors and properties
  3. Build potential paths with triggers and targets
  4. Choose the optimum path and swap its trigger and target gems
  5. Loop back to 2 until the game is over
I felt that the hardest part of the program was detecting the gem colors and properties, so I tackled that first.
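The five-step outline translates to a loop skeleton like the one below.  Every function passed in is a hypothetical stub standing in for the real detection and input-injection code, and the pause between passes is my own assumption; the sketch only shows how the steps chain together.

```python
import time

def play(find_origin, read_board, find_paths, pick_best, swap,
         game_over, delay=0.05):
    origin = find_origin()              # step 1: locate the grid origin
    while not game_over():
        board = read_board(origin)      # step 2: colors and properties
        paths = find_paths(board)       # step 3: triggers and targets
        if paths:
            swap(pick_best(paths))      # step 4: execute the best path
        time.sleep(delay)               # let falling gems settle
        # step 5: the while condition loops us back to step 2
```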

Stay tuned for Part 3: Gem Color Detection

24 October 2010

Arms Race, Part 1

Like many of us, I'm on Facebook.  I try to stay away from the social games, since most of them are poorly veiled attempts to hook your brain on electronic crack until you're willing to pay for it (I'm looking at you, Farmville), and some of them are out-and-out information pirates.  But one that I succumbed to completely was Bejeweled Blitz, created by PopCap.  PopCap is truly the master of simple little games that will suck your life away, and I have had my brain claimed by Bejeweled variants from PopCap and other developers before.  In fact, I share this weakness with one of my online poker friends, Missy; I convinced her to sign up for Facebook by telling her about Bejeweled Blitz.  Full disclosure, she was thinking about it anyway, but I think hearing there was a cool new (to her) Bejeweled variant free on Facebook was the final straw.

A quick note about how Bejeweled Blitz differs from normal Bejeweled: it's a 60-second game, so speed is of the essence; there is a personalized high-score board where all of your Bejeweled-playing Facebook friends automatically show up; and they reset the scores every week so that everyone is trying to beat each other's scores all the time.  Here are the top 5 from this week's high score board for my group of friends, which I am coincidentally dominating.

Missy and I are both very competitive, and we're used to facing off against each other at the poker table.  It wasn't long - like 3 games, maybe - before she had beaten my high score and I was grimly playing for all I was worth, trying to stay in the #1 spot.  Sixty seconds at a time, I watched hours disappear.  I think it was Missy who coined the phrase "stuck in a Bejeweled loop"; ironically, she was referring to her husband Todd when she first said it, but it easily applied to both of us as well.  Missy became my nemesis, my baby-with-one-eyebrow, my standard against whom my scores were judged.  We both waged psychological warfare, purposely getting semi-OK scores early in the week intending to re-up them when the other person beat the first round.  Sometimes others on my list would set higher scores, and I did my best to beat them, but failing to do so was never as infuriating as allowing Missy to win.

As we both practiced for literally hours per day, we got to be pretty good at it.  In the spirit of competition, Missy would occasionally accuse me of cheating when I set a particularly high score for a week.  In one conversation, she commented that Todd had semi-jokingly suggested that I had written a program to play Bejeweled for me.  I was flattered that he gave me so much credit - I thought that writing such a program would be so tough as to not be worth the effort.  But it got me thinking: could it be done?

Playing Bejeweled for hours a day, 60 seconds at a time, is a great way to waste your life.  But I found that my mind would wander while I played, letting me muse on the activities of the day and look at them from more perspectives than I usually would.  When I started playing I was coping with a very unpleasant work situation that I hadn't decided how best to change.  As things came to a head about a year ago, I think my time playing Bejeweled after work was therapeutic and productive, letting me find creative ways to deal with some of the less technology-related problems (i.e. politics) that I would not have considered if my mind weren't idle anyway.

Fast forward 8 months or so, and I had no major problems to work out while I played Bejeweled.  My work-related stresses were greatly diminished, and I once more felt like I was contributing my best to a company that appreciated my strengths and gracefully accepted my weaknesses.  That left me with a problem-solving technique in search of a problem.  When Todd commented about writing a program to play Bejeweled automatically, I found my problem.

Stay tuned for Part 2: Defining the Game

21 October 2010

So Much For That Plan

Gold for Cash
In my last post, just two days ago, I briefly outlined my plan for disposing of my GLD Dec calls.  I said that I wanted to hit a price or time target, and when either thing happened I was out.  Of course the very next day gold prices dropped 3%, and then another 2% today, wiping out 20% of the value of my calls.  I'm not quite sure what's going on, but that was outside my comfort zone, and I dumped the calls today for quite a lot less than I planned.  Now that I'm out, I'll detail my price/time limits a little more.

I bought the then-ATM calls over the summer for $5/share of GLD, believing that gold would appreciate in the fall.  Boy did it, and it wasn't long before I was able to sell less than half of them for about $11/share.  That took my initial investment off the table, and I kept the rest riding.  I saw them reach somewhere around $17/share at their high, and I had a price target of $25/share to get out of the rest.  That was pretty aggressive, but I also had a time limit.

Uncomfortable, as I said on Tuesday, with the many small indications of a coming correction in gold, I wanted out soon.  I think most people are idiots (see the CS part of the CS+MACO trade), and when everyone's bullish, it's time to sell.  Worse, literally the whole world is hanging on QE2-related verbiage expected in the statement from the FOMC's meeting on November 2 & 3.  That economic release is doomed: QE2 is already fully priced in, and all the Fed can do now is disappoint.  At the very least, all the IV comes out of the options after the announcement because the inflection point will have passed.  I definitely wanted out by Nov 2.

I have assumed for quite some time that I am riding a bubble forming in gold, and I swore that unlike the turn-of-the-century tech bubble, I would neither miss the run-up nor hang on for dear life during the pop.  That's why I have been in and out of leveraged gold positions via calls for the last year or so, and that's why I'll get back in after the mid-bubble correction makes everyone hate gold again.  I'm pretty bummed that I gave up so much of my profits by dumping today, but I still made about 150% on the trade since August, so I have no major complaints.

Speaking of CS+MACO...
Adding to the bearish signals this week, AAII published its survey results yesterday after the close: more people are bullish again.  With the CS portion screaming "sell!" and the MACO portion insisting "buy!", CS+MACO is still flat and will stay there for at least another week.

18 October 2010

Assorted Trades

Iron Condor
On Friday, I decided to add a little up-side protection to my December Iron Condor.  I'm trying to act when delta starts getting out of whack, and after a few days of stock market rallies the Dec IC was looking at a delta of about -20.  Sadly I can't be more precise on this because I forgot to jot it down (mental hand-slap).  Anyway, I decided the adjustment that made the most sense was to buy a Dec 760 call.  With my IC strikes at 610/620/770/780, this puts the naked-long call just one strike below my short call.  This adjustment brought my delta up to about +4 as of now, and didn't hurt the theta too much - still nearly 21.  It cost me 9.50, which is a big chunk of change, but I expect it to be the only upside adjustment I'll need to make to this position.

Until I come up with a better solution than Excel, unfortunately I can only display value-at-expiry.  Trust me when I say that current portfolio value is a lot curvier and much more attractive than this.

QQQQ Collar
Also on Friday, my October covered call on QQQQ as part of the collar trade expired in the money and I was assigned on the call.  Pursuant to the rules I set forth in September, I bought QQQQ back this morning at 51.50 and sold calls against it with a strike price of 53 for 56c.  Here are those rules again, since I keep having to search Facebook Notes for the numbers:

1. Monthly calls to be about 3%, and no less than 2.5%, out of the money.
2. 6-month put to be 8% out of the money.
3. No rolling prior to expiry.
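As a quick sanity check of rule 1 against this morning's fill (my own arithmetic, not from the post): with QQQQ bought at 51.50, the 53 call works out to about 2.9% out of the money, inside the 2.5-3% band.

```python
# How far out of the money is a call, as a percentage of the stock price?
def pct_otm(stock, call_strike):
    return (call_strike - stock) / stock * 100

assert 2.5 <= pct_otm(51.50, 53) <= 3.0   # about 2.9% OTM
```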

Gold Leverage
I am still long-term bullish on gold, and I express that by being long GLD, GDX, and AEM.  I also currently have some Dec calls on GLD that are so profitable that I have sold off enough to cover my original investment and the remainder are worth almost twice what I paid for the whole stack.  Nevertheless, I'm becoming concerned with the borderline irrational expectations for QE2 lately, so I'm ready to take some profits.  I started working a fairly distant sell order on the rest of my GLD calls this morning.  Hopefully it will reach my target price and I'll exit there, but I also have a time limit on this trade; I'll exit when that time limit expires regardless of the price action.

17 October 2010

A Too-Short Weekend

Some friends converged on Chicago for a couple of days of poker and fun this weekend.  Russ came from south Florida to brave the "nice February weather", Emmy sacrificed an important college football game in North Carolina, and Terry brought gifts to commemorate his soon-to-be-champion Oregon Ducks. 

Friday night I hosted the Welcome to Chicago evening, since I live so close to the airport.  Nat came in from Gurnee, bringing his own houseguest - Mike from Germany, who turned out to be a very solid poker player.  We played tournament style first (I came in 2nd to Terry), and then some cash.  Since all of us know each other's play styles so well, it often turns into an over-play contest when we play against each other.  Friday night was no exception, the most embarrassing hand for me being one in which I bluffed with 8 high into Terry only to discover he had the hand I was pretending to have.  After blowing off an entire buy-in by trying to force the action while card-dead, I rebought and tightened up, able at the end to come back and finish slightly up for the night.

When the night was finally over, Nat and Mike took Terry home with them where he was staying, and Russ and Emmy stayed with us.  The next morning, after a quick trip to Panera for coffee and bagels (plus a heads-up game with Emmy while Russ got ready to go), we all met at Heaven on Seven for lunch.  We were all due to meet back at Nat's at 7pm for resumption of the night's poker festivities, and everyone was feeling a little iffy from the night before, so we decided to have a laid-back afternoon at my house.

What followed was the bloodiest heads-up shoot-out I think I've ever played in.

We had six people participating, so we drew straws to see who would play in the play-in games, and who would get byes and be able to buy straight in.  Terry and I squared off in one of the play-ins, and Emmy and Nat faced each other in the other.  The best thing I can say about the play-in is that at least by losing it the shoot-out cost me nothing.  Nat won the other a few minutes after Terry beat me, and Emmy and I found ourselves dealing for the next round.  There, Russ beat Terry and Nat made short work of Mike.  The final round was also over in a flash, with Russ taking home the win.  This was practically a replay of our last shoot-out in Atlantic City this spring: Terry beat me in a play-in round, and Russ won the whole enchilada.  At least this time I was smart enough not to put a side-bet on the round with Terry.

Total time for the entire shoot-out: 35 minutes.

Saturday night was an unqualified success.  Nat has a custom poker table in the basement of his new house, and between our out-of-town guests and local poker friends (Marc, James, and Scott all made it), we filled it to capacity.  Since Minnesotans Missy and Todd couldn't work out the logistics of a weekend in Chicago, Nat set up a video Skype call so that they (mostly Missy) could at least enjoy the conversations and atmosphere even though she couldn't play.  Again we started with a tournament (I squeaked past the bubble and finished 3rd to James and Scott), and then moved on to cash.

Nine-handed play encourages a tight style, so to generate a little more action we instituted a 7-2 meta-game.  The way this works is that whenever someone wins with 7-2 - usually with a bluff, obviously - they can show it and have everyone at the table pay them an agreed amount - in this case, $2.  This makes winning with 7-2 much more profitable than simply picking up the pot, especially at a 9-handed table.  As a result, when a player starts betting aggressively it is natural for his opponent(s) to speculate on whether he has a big hand like AA or instead might have 7-2 and be on a big bluff.  When a player folds to 7-2, humiliation ensues from everyone at the table who is annoyed that they have to pay the winner an extra $2.  It's good clean fun.

As I write this, Russ, Emmy, and Terry are scattering back to their respective corners of the country.  For me, at least, this was one of those great weekends that I was sorry to see end today when I dropped Russ and Emmy off at the airport.  I really enjoyed having them stay with us, and I was pleased to finally introduce them to the Bintgoddess and let her see why I count them among some of my closest friends.

13 October 2010

It's Better to be Lucky than Good

About time for a post about poker, ain't it?

Played a little online today to get me in the mood for this weekend when the poker pals from out of town invade.  Folded seemingly forever in a 9-player sit-n-go while I played some shallow cash.  Doubled up in the cash game, which was nice but unexciting (QQ vs JJ all-in preflop and they held up).  About the time the cash game started drying up, I noticed we were down to 5 in the sit-n-go; so I banked the cash and concentrated.  By the way, these 9-player games pay the top 3 players.

With a stack of 1400 chips and blinds at 60/120, I shouldn't have been in a huge hurry to leave.  But for some reason I was, because when the guy under the gun min-raised, I decided to push with ATo.  He actually thought about it before he called with AQo, which was even more embarrassing than an insta-call.  His dominating hand held up, and I was down to 35 chips.  And now I was under the gun.

All-in with any two cards, now.

Next hand: J3o and my Jack kicker won me 105.
Next hand: all-in blind with Q4s and I hit a full house for 270.

This was when I offered the other players a deal.  Nobody seemed that interested, and the 6200-chip big stack (we'll call him/her Sandy) actually tossed out a "lol" at me.  How rude.

Next hand: A9o on the small blind and a river 9 won me 660.

Whew!  Through the blinds with 660 chips.  Nice!

I had enough that I could fold while looking for connectors, an ace, or big cards.  So I did, treading water for 14 hands.  Got two walks along the way, which were very nice surprises, but still had to pay some blinds, so after folding for a while I was down to 420 on the button when I found A2o.  It was the first ace I'd seen in 14 hands, we were down to 4 players by then, and the blinds were up to 80/160.  Easy decision when the guy on my right folded to me.  Sandy was up to 8600, having taken out the other player.  Flopped a 2 to double up to 920.

Another fold.

Next hand, shoved in on my own big blind with A9o after Sandy-the-big-stack limped on the button.  He/she actually folded, so now I had 1160 without a contest.

Folded my K9 on the small blind, giving the big blind a walk.
All-in on the button with ATo, Sandy called with 69, and I won 2240.
Another fold.
Another walk.
Another fold on the small blind.

Sandy was nice enough to double up the guy on my right, but he/she was still the chip lead with just under 5000 chips.  I stayed out of that one.

With the blinds up to 100/200, I decided to try to steal with A6o.  I raised it up to 600 and Sandy min-raised me from the small blind (big blind folded).  Hmm.  400 to call, 1800 to win... sure, OK.  I called, and the flop came 865.  Sandy insta-shoved, and it felt like an ace to me.  Besides, I only had 1240 chips left, and with his/her bet the pot was 3440.  Fine.  Sandy flips AK and my flopped suck-out holds up.

Here I was the chip-lead 27 hands after having only 35 chips. 

Five hands later I won another 4300-chip pot from Sandy, leaving him/her with 920 chips and solidifying my lead up to just under 7000.

Two hands later, Sandy went out 4th, bringing me up to 8500.  That's what s/he gets for "lol"ing at my gracious deal offer.

I think I'm ready for the weekend.

12 October 2010

Rsync Magic

I do not pretend to be anything other than an rsync tourist.  But I have had need of its services on two occasions, today being the latest, and I found it frustrating how difficult it is to get simple and clear information on how to set up its filtering to get all the files you want and nothing else.  Both times I've needed it, I've had to do a lot of relearning and searching.

I won't be solving the world's rsync problems on this blog.  But if nothing else I'll have another page to find on the internet the next time I need rsync, and with any luck it will answer my questions.  So here are the two ways I've used rsync, and how I set it up.

Task #1: Back up certain files to the NAS
The McHouse Enterprise Computing Cluster includes a machine running FreeNAS where I store all of our backups, MP3s, ISO images, and other assorted stuff I or the bintgoddess might need in our constant quest for entertainment.  I use Bacula to do full hard drive backups of the Windows 7 laptop she uses at school, her Windows XP desktop, and my Ubuntu desktop.  This is great, but certain files need more frequent backups than this, such as my machine's home directory. 

To do this, first I enabled the rsync service in FreeNAS.  Within FreeNAS' configuration web page, I set up an rsync path called HildeMark for my home directory.  In the image below, I've cropped off the stuff I didn't change.

Settings for the home directory rsync path

Then on my machine, I added the following command to my crontab to run at 4:15pm Monday-Friday:
/usr/bin/rsync -aFx --delete /home/mark/ kinakuta::HildeMark
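For reference, the complete crontab entry (the schedule fields for 4:15pm Monday through Friday, plus the command) would look like this:

```
# min hour day-of-month month day-of-week   command
15 16 * * 1-5 /usr/bin/rsync -aFx --delete /home/mark/ kinakuta::HildeMark
```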

In the interest of space, I'll refer you to the rsync manpage for the precise meaning of each parameter.  Briefly: -a is archive mode, -F tells rsync to honor per-directory .rsync-filter files, -x keeps it from crossing filesystem boundaries, and --delete removes anything from the destination that no longer exists in the source.  In a nutshell, the contents of HildeMark's physical location should look exactly like my home directory right after the transfer completes.

But then I noticed that I was copying a bunch of stuff I didn't want, like gigantic source trees from work - those are backed up at the office, no need for me to do it again here - and Firefox's cache, which likewise doesn't need backing up.  That's where the .rsync-filter file comes in.  In each directory traversed by rsync, you can optionally create a file called .rsync-filter containing exclusions of things that should not be backed up at or below the current level.  For example, my very simple ~/.mozilla/.rsync-filter excludes Firefox's cache from the backup:
exclude Cache/
Similarly, the .rsync-filter in my ~/src directory excludes anything that is source-controlled, since the SVN server's storage is already backed up and that's the master copy.  I can adjust any directory's .rsync-filter without having to worry about whether my changes have unintended side-effects on some other directory - it only affects file selection within that directory's sub-tree.
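For illustration, a rule in a per-directory filter like the one in ~/src might look something like this (a hypothetical sketch - the actual file's contents aren't shown here):

```
exclude .svn/
```

Because the file lives at the top of ~/src, a rule like this prunes SVN metadata everywhere under that sub-tree without affecting any other backed-up directory.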

Task #2: Roll Out Updates to a Customer
I just started a new project at the office wherein I am writing library code to be immediately used by another developer.  For reasons upon which I cannot elaborate, this developer cannot simply be given SVN access to the full source code tree.  He gets headers and libraries only for the things he needs to build.  Everything else is held back.  Additionally, since he will be developing against library code that I am actively writing, I can't just let him have whatever file I just saved... I need to at least make sure it compiles before I give it to him.  Likewise, he needs to control when he updates his copy of my library so that he can reach a good stopping place in his own code before having the library change on him.

This guy is basically a customer, so I need a staging area where I place the files he is allowed to see.  I do this when it is appropriate, and then when he's ready he copies everything from that staging area to his own development machine.  Most times, only a small subset of files - or a small portion of a file - has changed.

Again I started by setting up an rsync daemon, but this time I didn't have the luxury of using FreeNAS' administrative web page.  On my development workstation, I created the following configuration in /etc/rsyncd.conf (the file the rsync daemon reads by default):
lock file = /var/run/rsync.lock
log file = /var/log/rsync.log

[snapshot]
path = /staging
uid = nobody
gid = nobody
read only = yes
list = yes
hosts allow = 172.0.0.99
This creates a visible path called "snapshot" with storage at /staging.  I made it read-only since there is no reason for him to upload to me, and gave his IP address access to it without him needing a password to my machine.  The rest is pretty boilerplate.

Next I started rsync in daemon-mode with the logical syntax of:  rsync --daemon

To copy into this location, I needed to reproduce my source tree but redact those things that he didn't need - namely unrelated modules and all C++ implementation files.  The paths to various headers should not change so that cascading #include directives continue to work nicely.  I also needed to take all the libraries scattered throughout my source tree and assemble them in a single directory to make it easier for him to link to them.  To do these two things, I wrote a little script:
#!/bin/bash
rsync --progress --stats --recursive --times --delete \
      --exclude-from=$HOME/rsync/filter.txt \
      $HOME/src/proj1/source/ /staging/source
find $HOME/src/proj1/source -name "*.a" -exec cp -p {} /staging/lib/ \;
The rsync command gives me progress and statistics, deleting any files from the target directory that I remove from my source directory, and filtering based on the rules in ~/rsync/filter.txt.  I'll get to that in a moment.

The find command returns all the files ending in ".a" under ~/src/proj1/source, which happen to be exactly the static libraries I want him to be able to link against.  Using find's -exec syntax, it copies each file to /staging/lib while preserving the modified date/time.  This is important, because otherwise when I run this script it will make it appear as though I changed all the libraries when maybe I only changed one or two.

And here's a representation of the filter.txt file with some details removed for confidentiality.
- **/.svn*
- CMakeFiles*
- /build
- /secret1
- /Libs/secretLib
- /secret2
+ /examples/example1/main.cpp
+ /Tools/tool1/*.cpp
+ /Tools/toolsuite1/*/*.cpp
+ **/
+ **/*.h
+ **/*.inl
- *
  • Leading forward-slashes (/) refer to the top of the source directory, not the top of the volume.  So "/build" really means "~/src/proj1/source/build".
  • "-" at the beginning of the line means "exclude stuff matching this pattern."
  • "+" at the beginning means "include stuff matching this pattern."
  • The "**" token means: "match against every subdirectory".  So "- **/.svn*" will exclude x/.svn-info, x/y/.svn-stuff, and x/y/z/.svn.
  • "- CMakeFiles*" excludes all files and directories starting with "CMakeFiles" - this is where a lot of the temporary build scripts go, so this line removes a ton of useless gunk.
  • The next four exclusion lines keep rsync from descending into the named directories.  It is free to descend into Libs, just not Libs/secretLib.
  • The next two lines explicitly add some C++ implementation files he's allowed to have and might find useful.  The third inclusion line adds C++ implementation files from all the directories within toolsuite1.
  • "+ **/" is a catch-all rule to say: descend into every directory from here if you haven't matched an exclusion rule yet.  It only matches directories, so it controls where files are taken from, not which files.
  • "+ **/*.h" and "+ **/*.inl" explicitly specify that all files ending in "*.h" and "*.inl" in all subdirectories (if not previously excluded) should get copied.
  • "- *" means "and nothing else".
Whew!  That's only half the job: getting the files from my work area to the staging area when I deem them ready for roll-out.  Thankfully the other half of the job is a single command, and much simpler.

On the other developer's computer, he executes the following to copy everything down from my staging area to his library area:
rsync --progress --delete --recursive --times dev-mark1::snapshot ~/src/markLib/
The dev-mark1::snapshot notation, which is in the form machine::path, is interesting.  The double-colon (::) indicates that his rsync client should attempt to connect to a remote rsync daemon running on dev-mark1, and then request files from its snapshot path.  Since I set snapshot up to point to my staging area, this gives him two directories under ~/src/markLib/: source/ and lib/.  Source contains a full source tree with only headers in it, and lib contains all the libraries I copied into it using find.

Now whenever I feel like sending out a mini-release, I run my script.  Whenever he wants to check for a mini-release, he runs his rsync command.  Rsync and find take care of the rest.  Voila!

10 October 2010

Hard Drive Exhumations

I have a lot of hard drives lying around, because no matter what happens to the other parts from a decommissioned computer, the hard drive always gets stored.  There's too much sensitive data on there for me to let anyone else get their hands on it, and for many years I have been stashing hard drives in the basement with the intention of "one day" zeroing out all the sectors and disposing of them... somehow.

Today I decided to take an inventory, and it was an interesting-but-dusty trip through the last 13 years of building and destroying the computers that live in this house.  I have no fewer than 19 decommissioned hard drives that I know of.  Here are a few of the more dramatic highlights:

  • There are two 250GB Western Digitals that I pulled out of the LaCie BigDisk external USB drive enclosure after it died.  I bought those back in the halcyon days of Mad Dog Software as a backup solution, because I could fit the contents of all the computers in my house on just one of them.  The intention was to get two of those enclosures and keep one off-site.  That fizzled, but it did give me an excuse to write a trickle-transfer utility to run on Linux and send backup files to Dan's house for storage.  That was my first serious Linux project.
  • There are three 60GB Maxtors whose original purpose is a mystery to me... but one of them has a post-it note that reads "noisy" in Diane's handwriting.  I can only imagine how all this came about.
  • There are four 60GB IBM Deskstars dating back to the dead-drive catastrophe of 2001 (or was it 2000? not sure now).  I decided to build the Mother of All Machines, so I bought four of these hot-running high-failure-rate data bombs and put them in RAID 0+1.  Maximum PC rated them Kick-Ass at the time, so they seemed like the best choice.  Little did I know I had just purchased drives that would be the subject of a class-action lawsuit a few years later.  The thing about RAID 0+1 is that, in theory, it has the speed of RAID 0 (striping) with the data safety of RAID 1 (mirroring).  But in practice, it isn't that fast and it isn't that safe.  Especially when you have two drives fail simultaneously, like I did.  The four I have now include two warranty replacement drives, so I think they all work.
  • Last spring I built my grandmother a new PC from leftover parts I had lying around, replacing one that was refusing to boot.  I took the dead hardware with the plan of figuring out what the problem was, but didn't even crack the case until today.  Inside I found three hard drives.  They proved to be resistant to my charms and I was in a hurry because I was expecting a phone call.  Can't wait to get back to them and see what sort of complicated storage scheme was dreamed up on those.
  • Buried deeply in the random-hardware boxes and covered in dust, I found a 6GB (yes, six gigabytes) Western Digital from 1998, and a circa-1997 Fujitsu that gave no size nor cylinder information whatsoever.  Wow.
Sitting here next to my desk is a fully functional Windows PC of fairly recent vintage that has no keyboard, monitor, or mouse - just power and network.  I know I had a reason for not moving this thing down to the basement, but I couldn't say what that reason was.  In any case, I'll use that machine to clean these hard drives one at a time over the next million years or so, all so that I can safely dispose of them.  I may dual-boot it to Linux first, just for fun, I don't know.

Diane suggested I look at FreeCycle as a way to give away stuff to people who want it... I can't wait to see who wants a 10GB IBM OEM hard drive from 1998.

08 October 2010

Well That's a Bummer

The first negative experience with Think or Swim I've had:

Hello - 

I use your trading platform under paperMoney to test out trades, and I find it extremely valuable.  It has taken me far longer than I expected to approach the stage of being able to fund my account and start trading with real funds, but this is not a reflection on your software at all. On the contrary, I have been consistently impressed with its high level of quality and richness of features.

I would like to kindly request your permission to include a screenshot from your Analyze tab on my new blog from time to time.  I am not sure what restrictions I might violate by doing this without permission, so I felt it would be better to ask first.  Here's a link if you want to see what I've said about thinkorswim so far: 
http://riskofruin.markmccracken.net

I am not writing reviews or detailed descriptions of your software (if I were they would be positive!) - rather I feel a value-at-expiration graph would be much clearer than a bunch of prose describing my option research positions.  Without your permission I can muddle through with Excel, but that is a distant second choice.

Thanks in advance,

Mark McCracken

Mark,

Thank you for the reach out and the kudos.  Unfortunately, we must respectfully deny your request to publish our firm’s copyrighted materials.  This ability is strictly reserved for those companies/individuals with which tos has a Marketing Agreement in place.

We wish you the best of luck in your trading and your endeavor.

Sincerely,
Scott Garland
Scott Garland
Compliance Manager

Scott -

How might I go about securing a Marketing Agreement with ToS?

- Mark McCracken

Mark,

With the recent acquisition by TD Ameritrade all Marketing Agreement activity has been frozen at this time. Sorry.

Sincerely,
Scott Garland
Scott Garland
Compliance Manager

Well, Scott Garland and Scott Garland have spoken (twice each).  No screenshots on the blog.  That's going to make it more difficult to illustrate my point, and it means no free marketing for Think or Swim, but I understand that there's no such thing as good publicity, or something like that.

06 October 2010

December Iron Condor

In another of my paperMoney trades, I experiment with iron condors.  Today I opened a position on my next month's iron condor, expiring in December, on RUT.  RUT is the Russell 2000 index, and options on it are European-style and cash-settled.  This means they cannot be exercised early (very important for spreading), and in-the-money options at expiry won't cause securities to change hands - just money.  Settlement at expiry is weird, though, so it's best not to take them to expiry in any case.

WTF is an Iron Condor?
An iron condor is a market-neutral option strategy that is short volatility but with limited profit/loss ranges.  It consists of two vertical spreads: a put spread below the current index price, and a call spread above the current index price.  The long options in the spreads are both farther OTM than the short options, so opening an iron condor position generates a credit.  The farther apart the short option strikes are from each other, the lower the risk that the iron condor will lose money, but the less credit it generates on opening.

A picture is worth a thousand words.  Luckily for you, I have both.  Check out this page from Option Trading Tips:  Iron Condor Description.  I'm working on getting permission from ThinkOrSwim to include screen shots from their software.  In the meantime, this is the best I can do, sorry.

Terminology does not agree on how to refer to iron condors that generate a credit when opened.  They consist of two short vertical spreads, but many (including the website above) call that combination a Long Condor.  To me, selling means that I get money; buying means that I give up money.  So throughout this blog I will rightly or wrongly refer to iron condors like they're short: I sell them to open them and I buy them to get out.  So today I sold an iron condor, opening a short position, and I generated cash.  Questions? No? Excellent.

Where To Begin...
Here I have to give Mark Wolfinger props again, because about a year ago I looked at iron condors briefly when a co-worker (not a professional trader, in this case) told me about how he was making a guaranteed 10%/month on them.  This seemed too good to be true, and after analyzing them a little I decided that it was: the probability-weighted return on his capital was far too low for the risk of ruin he was taking.  I dismissed iron condors as hardly better than naked option selling, and was ready to leave it at that.  In the process, however, I ran into Mark Wolfinger's blog Options for Rookies, and I started reading it regularly.  Over the next few months I realized that there was more to iron condor trading than I first assumed.  Guaranteeing 10%/month was indeed too good to be true, as I suspected.  But there was nevertheless a viable trade there for someone willing to put in the time and effort to build experience.  A firm believer that nothing worth doing is easy, I set out to learn.  I'm just getting started on that journey, and though it will never end, I hope that soon I will have made enough progress to begin profiting from it.  I don't know when that will be, but I know it isn't now yet.

I've followed MW's lead in a lot of respects, because I am more of a learn-by-doer than a learn-by-reader.  As I try different approaches and find my own comfort zones and style, I start to diverge from him; this is natural.  But some aspects of his trade are relatively arbitrary from my perspective:  he trades RUT because he feels that its volatility is not-too-high but not-too-small; he trades options with 60+ days to expiry because he feels that is the right mix of risk (gamma) and reward (theta).  Never having traded iron condors on any index, and never having gotten burned in either direction on time-to-expiry, I figured 60+ days on RUT was as good a place to start as any.

My Own Trading Style
My current behavior pattern is to start looking for a new iron condor position around the first of the month two months before expiry.  This gives me 60-80 days or so before expiry.  Also like Mark, I look to get out of the condor early if the market is willing to let me buy back pieces of it at good prices.  I don't try to choose a low-risk / low-reward condor that I never have to adjust, but I try to give it enough room to move that I can make adjustment decisions after work for trading on the open the next day.  Taking some of his lessons to heart, I try not to increase my position in the course of adjustments; however, I will do so if I have previously reduced the position via cheap buy-backs.  I try very hard to evaluate what the position is now, instead of whether I'm up or down from my entry point.  This is a lot harder than it sounds, but Mark harps on it so much that it is starting to sink in.

In Theory, There Is No Difference Between Theory and Practice
A perfect situation in my trading style is to find a new iron condor on, say, October 1 for December expiry that I can put on generating 3.50 or so in premium while keeping the two short options a good 15-20 strikes apart.  For this situation to remain perfect, the market needs to move up and down some so I can cheaply (like 20c or so) get out of the two spread legs, but not so much that I feel I need to adjust to protect my position.  The perfect scenario ends about 30 days before expiry when I exit the last position without ever having to adjust.  Net profit when perfect: nearly $3.00 per contract, or about 30% on margin risked.

But In Practice, There Is
In reality, that scenario never happens.  I always have to adjust, I always agonize over how much insurance to buy and when, I seldom pay as little as 20c to buy back my spreads, I frequently enter the position for less than 3.50 credit, and I often find myself still trying to dump some position off with only 2 weeks to go.

I often have two condors on at any given time: one that I'm adjusting and working my way out of, and one that I'm watching eat up theta prior to its first adjustment.  If I end up with over 1.00 per original contract profit, I'm thrilled. Note that because of adjustments, 1.00 per original contract is a lot less than 10% margin profit, because the margin gets bigger and the profits get smaller with insurance.  If my net cash flows are positive at the end of a condor run, I'm satisfied.  If I learn something along the way, it's all worth it.

I'm slowly starting to get a feel for what values of delta make me nervous, and I'm better at choosing adjustments that don't give me a negative theta, since that would negate the whole purpose.  I'm always massively short vega, since that's the nature of an iron condor; and gamma doesn't really affect me too much 60 days out.  It is nevertheless always the shadow in the corner, and I keep an eye on it more and more the closer to expiry I find myself.  Experience has come very slowly, but it is starting to click.  That's a cool feeling.

Current Situation
Right now I have a heavily-adjusted November position on.  It's too complicated to explain without charts, so I won't try.  But despite the drop in volatility the past couple of days as the market rallied, I was able to put on my December iron condor position for my target price of 3.50.  It's a little tighter (short strikes are closer together) than some previous months, but I'm also getting a little more comfortable with adjustments; this lets me generate more premium credit at the start without so much fear.  My new RUT December condor is a 610/620/770/780, meaning that I am long the 610 puts and the 780 calls, and short the 620 puts and 770 calls.  Max profit: the 3.50 credit it generated.  Max loss: 6.50.
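To sanity-check the max profit and max loss numbers above, here's a small sketch of the expiry P/L per contract for this condor (bash with awk for the arithmetic; the strikes and 3.50 credit come from the position described, everything else is illustrative):

```shell
#!/bin/bash
# Expiry P/L in points for the short RUT condor described above:
# long 610 put, short 620 put, short 770 call, long 780 call, 3.50 credit.
condor_pnl() {
  awk -v s="$1" 'BEGIN {
    lp = 610; sp = 620; sc = 770; lc = 780; credit = 3.50
    # loss on the short put spread, capped at the 10-point width by the long put
    put  = (sp - s > 0 ? sp - s : 0) - (lp - s > 0 ? lp - s : 0)
    # loss on the short call spread, likewise capped by the long call
    call = (s - sc > 0 ? s - sc : 0) - (s - lc > 0 ? s - lc : 0)
    printf "%.2f\n", credit - put - call
  }'
}
condor_pnl 700   # between the short strikes: keep the full 3.50 credit
condor_pnl 600   # below the long put: max loss, 10.00 - 3.50 = 6.50
condor_pnl 775   # inside the call spread: a partial loss
```

Multiply by the standard $100 index option multiplier to get dollars per contract.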

05 October 2010

I Spy a Crossover

I'm running a mechanical trade in paperMoney on SPY that is based on Simple-Moving-Average Crossovers.  I just started running this trade, but I did a little back-of-the-envelope backtesting before I started and I really liked the way it performed over the last couple of years.  Since the 25-day SMA crossed the 200-day SMA to the upside today, it bears mentioning.

There are two competing indicators in this trade: moving average crossovers (MACO) and individual investor sentiment, which I use as a contrary indicator (CS).

Moving Average Crossovers
MACO is bullish when the SPY daily closing price is higher than the SPY 25-day SMA, which in turn is higher than the SPY 200-day SMA.  MACO is bearish in the opposite situation: when SPY closes below the 25-day SMA, which in turn is below the 200-day SMA.  In any other closing price configuration, MACO is neutral/flat.

If this were where it ended, this would be a classic long-term trend-following trade.  It would have killed in 2009, and been killed in 2010.

Contrary Sentiment
The American Association of Individual Investors publishes a set of weekly indicators based on surveys of their members.  They give the percentage of their responding members that are bullish, bearish, and neutral.  I have arbitrarily chosen the bullish indicator, and I use the prior calendar year's average value as a midpoint - this year, that midpoint is 36.8%.  I then set the entry lines 10 percentage points above and below that value.  I get a new value from AAII every Wednesday, when they publish the survey.

CS is bullish when the surveyed value is below (yes, below) the low entry line, bearish when the surveyed value is above (yes, above) the high entry line, and signals "exit" when the surveyed value crosses the midpoint.  I basically use it to fade individual investor sentiment, because I think most people are morons - especially those who spend money on a membership to a website so they can donate their time filling out surveys.

So when that bullish indicator is above 46.8%, CS will initiate a "short" signal, remaining in "short" state until the indicator drops below 36.8%.  When the bullish indicator is below 26.8%, CS will initiate a "long" signal, remaining in "long" state until the indicator rises above 36.8%.  I'm trying to only place bets against other investors when it is more or less universally agreed upon how great/shitty the world is.
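Since the entry lines (26.8% and 46.8%) differ from the exit line (36.8%), CS behaves like a little state machine with hysteresis.  A minimal sketch of that logic (illustrative code of my own; the function name and interface are not part of the actual trade):

```shell
#!/bin/bash
# CS state machine: thresholds are this year's from the post (midpoint 36.8,
# entry lines at 26.8 and 46.8).  Args: previous state, latest %-bullish survey.
cs_signal() {
  local prev="$1" pct="$2"
  if   awk -v p="$pct" 'BEGIN{exit !(p > 46.8)}'; then echo short   # initiate short
  elif awk -v p="$pct" 'BEGIN{exit !(p < 26.8)}'; then echo long    # initiate long
  elif [ "$prev" = short ] && awk -v p="$pct" 'BEGIN{exit !(p < 36.8)}'; then echo flat
  elif [ "$prev" = long ]  && awk -v p="$pct" 'BEGIN{exit !(p > 36.8)}'; then echo flat
  else echo "$prev"; fi                                             # hold the state
}
cs_signal flat 50.89   # the 16 September survey: initiates "short"
cs_signal short 42.5   # 30 September: below 46.8 but above 36.8, stays "short"
```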

As a momentum-fading indicator, CS kills in sideways markets like most of 2010 has been.  It gets killed in trending markets like 2009, where everyone got really excited and stayed really excited while the stock market rallied a gazillion points for no reason.

Putting it all together
Now I aggregate the signals thus:
  • If both CS and MACO say "flat/neutral", my position is flat
  • If CS and MACO disagree (long/short or short/long), my position is flat
  • If CS and MACO agree on a position (rare), I take that position
  • If one says "long" and the other "neutral", I'm long (but see below)
  • If one says "short" and the other "neutral", I'm short (but see below)
  • If there was a disagreement (long/short, short/long), and MACO goes to neutral/flat, I do not initiate a position until the next CS survey release is in the "initiate" zones.  I do not "back into" positions.
  • I only use closing prices for the MACO portion, and I trade the next day on the open.  If SPY gaps back through into neutral territory before the open, I treat it as no signal.  This basically just makes the backtesting easier.
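The stateless part of the rules above can be sketched as a tiny bash function (illustrative code of my own, not part of the actual trade; the last two bullets require history and are omitted):

```shell
#!/bin/bash
# Combine the CS and MACO signals (each "long", "short", or "flat") into a
# position, per the aggregation table above.
combine() {
  local cs="$1" maco="$2"
  if [ "$cs" = "$maco" ]; then echo "$cs"        # agreement, including both flat
  elif [ "$cs" = flat ]; then echo "$maco"       # one neutral: take the other side
  elif [ "$maco" = flat ]; then echo "$cs"
  else echo flat; fi                             # long/short disagreement: stay out
}
combine short long   # the current situation in the post: flat
```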
In my backtesting, I compared various combinations of CS and MACO to a simple "buy and forget" strategy, resetting the entry price on 1 January each year.  I found that CS tended to keep MACO out of trouble by catching the tops and bottoms of the market trends really nicely.  On the other hand, MACO would keep CS from gritting its teeth and fading a long-term trend for a huge loss.  In fact, as a long-term trend asserted itself, CS would gradually drift into neutral territory, allowing MACO to get a position on and chase the trend.

The combination I describe above didn't consistently beat the "buy and forget" strategy, but: (a) it was a lot more fun; and (b) buy-and-forget is what we all already do in our 401(k)s anyway - this whole trade is a diversification, in my opinion.  And, honestly, it has beaten the snot out of "buy and forget" so far this year.

Where are we now?
The last entry signal in CS was "short" on 16 September, when the survey came out 50.89% bullish.  It has since drifted lower.  The most recent survey on 30 September was 42.5%, which would not cause a new position, but it remains "short" because we haven't gone through 36.8% yet.  We get a new survey tomorrow, and I'll go out on a limb and predict that it will remain above 36.8%.  In fact, for double-or-nothing I'll predict an up-tick from last week.

MACO, on the other hand, has been flat/neutral since 2 September, when SPY closed at 109.47: above the 25-SMA of 109.09 but below the 200-SMA of 111.79.  SPY has been trading above both of its moving averages since gapping higher over the weekend before 13 September, and today it finally dragged the 25-SMA higher than the 200-SMA at the close, generating a "long" signal:
  • SPY Close: 116.04
  • 25-day moving average: 112.46
  • 200-day moving average: 112.05
With MACO transitioning from "neutral/flat" to "long" while CS remains "short", there is a disagreement, so the trade stays flat.
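For the curious, here is how I'd code up the MACO state check using today's numbers.  This is just my reading of the rule (price sitting between the two averages means neutral/flat; otherwise the 25/200 crossover decides), so treat the edge cases as fuzzy:

```python
# My reading of the MACO state: neutral when price is between the averages,
# otherwise the 25-day vs 200-day crossover decides.  Sketch, not gospel.

def maco_signal(close: float, sma25: float, sma200: float) -> str:
    if min(sma25, sma200) <= close <= max(sma25, sma200):
        return "neutral"                          # price between the averages
    return "long" if sma25 > sma200 else "short"

# 2 September: close 109.47 between 109.09 and 111.79 -> "neutral"
# Today:       close 116.04 above both, 112.46 > 112.05 -> "long"
```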

04 October 2010

A Little Free Advertising

I think some background might be useful before I jump into trade journal activities.  Most of the trades I will describe on this blog are being done in Think or Swim's paperMoney platform.  A few might be done with real money, and I hope that someday the realMoney/paperMoney ratio will increase.  But I have no intention to specify which ones are real and which ones are fake: my actual personal trading activities in the real market risking real capital are not something I want to put on the internet.  Likewise I don't plan to be very specific about position sizes or prices except where they are necessary to understand what I'm doing.  There also won't be profit/loss numbers.

There are two big reasons for not being very specific about these things.  The primary one is privacy: if I talk about my trading sizes, profit/loss, or which trades are real or fake, I give away personal financial information.  Additionally, though, I don't want anyone mimicking my trades.  If I wanted to be an investment advisor I would go off and get certified, and make a lot of money doing that.  Trades described in this blog are intended to be general ideas open for discussion, and they are certainly not recommendations or advice.  See that little disclaimer right under the title bar?  Yeah.  So if you're looking for stock tips, picks, predictions, or strategies, move along now and don't come back.  If you want to read about my own personal thrills and spills in the marketplace and interact with me about what I learn along the way, welcome.

In any case, assume that all positions are held in my paperMoney account (not real money).

So here's a little commentary about this thing called paperMoney, of which I am a huge fan.  Think or Swim has an interactive trading front-end written in Java.  This is great for me because I made the Windows-to-Linux switch about 18 months ago and I get kind of pissed off when I have to run a VM just to run a piece of software.  ToS's front-end is fully featured, providing charts, stock screening, real-time news feeds, trading grids, account/position management information, etc.  You hook it up to your trading account at thinkorswim.com and you're good to go: any trade you do goes against your buying power in the account and shows up both on your statements and in the front-end.

When you first connect, you choose between realMoney and paperMoney.  I have personally never used ToS's front-end for real-money trading - only paperMoney.  But from what I understand, paperMoney is exactly the same software except for two very important differences: 1) trades in paperMoney don't actually make or lose you real money; and 2) market prices seen in the front-end under paperMoney are 20 minutes behind.  I'm sure that ToS does this because of republishing and licensing agreements with the exchanges providing the market data in the first place.  Another minor difference is that you start with $100k in your paperMoney account - I have no idea what you do if you go broke, and hopefully I won't find out - so there is no depositing to do.  And execution is occasionally a little strange: ToS fills your limit order based on mid-prices instead of actual price action.  This is the best of a bunch of compromise approaches, in my opinion.  But you do sometimes get kind of a weird fill.  On May 6 (Flash Crash day), I had some limit orders working to exit some positions at ridiculous prices just so that I wouldn't forget about them, and they got filled at even better prices than I had specified.  What should have had a max-$2000 profit based on the option strategy ended up netting me $25k.  If only it were real...
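My loose mental model of the paperMoney fill mechanics looks something like this.  To be clear, this is a guess based only on fills I've seen, not anything from ToS documentation, and the function is my own invention:

```python
# Speculative model of paperMoney limit fills: the order seems to fill once
# the bid/ask midpoint reaches the limit price, rather than when the order
# would actually trade.  That would explain my too-good Flash Crash fills.

def paper_fill(side: str, limit: float, bid: float, ask: float) -> bool:
    """Would a paperMoney limit order fill at this quote?  (My guess.)"""
    mid = (bid + ask) / 2.0
    return mid <= limit if side == "buy" else mid >= limit
```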

The front-end is really well-tailored to options trading, which is why I selected it in the first place.  One of the screens shows position valuation graphs that can be played around with to examine the effects of changes in the underlying, delta, time, vega, and so on.  Simulated trades can also be applied to positions from there so that an informed decision can be made before submitting the order.  I spend a lot of time on that screen before making adjustments.

I'm not sure how protective TD Ameritrade (owners of Think or Swim) are about screenshots and whatnot, so I won't post any here.  But check out thinkorswim.com and read all about it, if you haven't ever looked at their platform.  I'm really impressed with the software for having most of what I want in it, and I'm also really impressed at their willingness to let me paper-trade indefinitely without ever depositing any money.  That sort of accommodation shows confidence that their software is so good that I will still want to use it when/if I transition to a real-money option trader.  And that, my friends, is rare.

I sold some December 2010 calls on GLD today, taking my initial investment off the table.  My remaining position is all profit.  I did this today because of the fantastic run-up GLD has had over the past two months; some consolidation is due, and maybe a correction, so it seems like a good idea to reduce my risk and lock in a floor on my return.  Another reason is that the trader that sits next to me at work (we'll call him NeighborTrader, or NT) reported this morning that when he loaded up yahoo.com he noticed that the phrase "gold prices" was at the top of the Trending Now list.  That's a sign of a short-term top if I ever heard one.  It's a good time to hold a call option on my call position.

When NT's Iowa-residing grandfather asks about investing in gold, I'll sell the rest.

03 October 2010

So Much Pressure

The first post in my new blog.  The first post in my first blog.  God.

For the last year or so, I have been writing up my investment activities in Notes and posting them on Facebook for my friends and family to read and comment on.  I did this hoping to spark a few lively discussions and free exchanges of ideas, to keep myself honest, and to share some of the knowledge I have gained while working in the futures trading industry for 13 years.  Two out of three isn't bad.

I got some good questions asking about this or that, but very little in the way of idea exchange.  The big success story was in how I approached my trades after committing to explaining them to others first.  OK, "first" didn't always happen, so there was often a fair amount of post-trade rationalizing going on; but there were several times throughout the year when I decided to make a [poor] trade but then abandoned it because I couldn't think of a clear way to explain how it might be successful.  Every saved loss is a win.

Ultimately, though, I found the Facebook Note mechanism unsatisfying.  I started following Mark Wolfinger's excellent blog Options for Rookies, had some very interesting discussions with co-workers about option-spread trading, and began playing around with Think or Swim's trading front-end using a paperMoney account to try out ideas without throwing real money away.  As I spent more and more energy learning about credit spreads and option position management, I contemplated discussing these topics with my friends and family on Facebook.  But I was certain I would immediately confuse them, and that I might as well just write my thoughts in a trading journal and keep it on a shelf.

Although trading journals are very valuable, they don't come up with ideas on their own.  I realize that for the indefinite future I can expect exactly two regular readers of this blog - my wife and my mother - but nevertheless the possibility exists that someone will stumble upon this public journal, read it, learn from it, and respond with ideas of their own.  That would be cool.

I also want the freedom to talk about stuff other than trading.  Under my Notes pattern, I felt like I had specified my topics in advance and changing them would make things too incongruous.  Beyond investing and trading, I hope a regular reader will find interesting commentary from me on poker, software development, life in Chicago, skiing, and many other things that pop into my head.  If not, then at least I'll have recorded my thoughts somewhere and stressed out about my grammar from time to time.  That's important, too.

I chose the name Risk of Ruin because I am a big fan of the concept.  I think it appears everywhere in life: it's why you buy insurance, wear your seatbelt, favor crash-tested cars, and avoid restaurants where your friend got food poisoning once.  It is the single biggest force in a poker tournament, one that must be simultaneously defended against and wielded in order to succeed over the long term; it is the primary deciding factor on how "big" a game to play; it is the bogeyman that good bankroll management practices are designed to avoid.  It controls all trade-sizing decisions more than anything else; it is the entire reason why you should diversify your investments.  It explains why people who have never skied before think we're all crazy; experiencing its touch is also why we love skiing so much.
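Since this blog is named after the concept, I'll indulge in one formula.  The textbook gambler's-ruin result (standard probability, nothing original to this post) puts a number on the bankroll-management point: betting one unit at even money with win probability p > 0.5, the chance of ever busting an n-unit bankroll is ((1-p)/p)^n.

```python
# Textbook gambler's-ruin formula: even-money one-unit bets with an edge
# (p > 0.5), starting bankroll of n units.  Standard result, not mine.

def risk_of_ruin(p: float, units: int) -> float:
    assert p > 0.5, "formula assumes you have an edge"
    return ((1 - p) / p) ** units

# Same 55% edge, very different fates:
#   risk_of_ruin(0.55, 5)  -> about 0.37    (playing way too "big")
#   risk_of_ruin(0.55, 50) -> about 0.00004 (proper bankroll management)
```

That's the whole bankroll-management argument in two lines: the edge doesn't change, only how big you play relative to your roll.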

Risk of ruin, to me, is more than its definition with respect to trading or poker.  It is the flat line on the bottom-right of every net worth graph and EKG display.  It is the poison that must be drunk in order to participate in life's game.  It is the femme fatale flirting with you in front of your wife.  It is the Queen of Spades in a game of Hearts, helping you win or making you lose depending on your skill and a whole lot of chance.  It's Gravity.

OK one more cheesy example because I love it so much.  This will also establish some geek cred.

In the Star Trek TNG episode "Tapestry" (wiki), Captain Picard dies because a minor phaser blast damages his artificial heart.  Q catches him in the afterlife and sends him back in time to avoid the bar brawl in his youth that ended with him getting stabbed in his natural heart and needing the replacement.  Doing so, Q argues, will ensure that Picard survives this present-day minor skirmish.  So Picard goes back and avoids taking the risk of the bar fight.  It turns out that this scene was an inflection point in young Picard's life: after avoiding that risk of death (aka ruin), he finds himself always taking the risk-averse paths, unable to accept that an uncommon life requires uncommon risks.  Popping back to present day, we discover aging Lieutenant Picard is a pathetic little man with little dreams.  After several depressing scenes of mediocrity, the hero within demands a repeat audience with Q.  Eventually, Picard chooses certain death after a hero's life instead of an extended but unsatisfying existence.

In the real world, we don't get to go back in time and approach our actions with the certainty of known outcomes.  This makes it a lot more exciting, don't you think?