24 September, 2012

Now for something completely different

I want to take a break from all these heavy numbers and statistics to talk about video games for a second.  What I'm talking about may be yesterday's news, but... shut up.  There's enough news here to justify talking about it.

X-COM: Enemy Unknown

In X-COM: Enemy Unknown, you are placed in command of an elite agency of soldiers and scientists tasked with protecting the world from an invasion by an unknown alien menace.  This game was announced about 8 months ago, and caused the internet to collectively shit its pants.  The original X-COM (subtitled "UFO Defense" or "Enemy Unknown", depending on where in the world you were when it came out) was originally released by Microprose in 1994, almost two decades ago.  Since then, it has been widely hailed as one of the best games of all time, and I strongly agree.  As has happened with so many of our favorite IPs in the last 10 years, X-COM has been "reimagined" by a new developer: Firaxis.

Important note: this is not the other rebooted X-COM game under development at 2K Marin.

The game is due out on 9 October in North America (12 October elsewhere), and Firaxis just released a playable demo today on Steam.

After playing the demo, any doubts that this game could be anything less than amazing died faster than a panicking rookie armed with a 9mm pistol.

Like the original, the game is composed of a tactical portion and a strategic portion.  During the game's tactical engagements, you are responsible for using your squad of soldiers to eliminate the alien threat in an urban or natural environment.  In the strategic portion, you make decisions about funding allocation, research, soldier load-out and promotion, etc.

The demo consists of 2 tactical missions, plus the chance to make some meaningless research decisions and promote a unit back at the base.  We get to see some basic interfaces and systems that are critical to playing the game.  Just enough to whet our appetites, and make us froth at the mouth like rabid badgers for two weeks while we wait for the damn thing to be released for reallies.

Even though it's just a tutorial, the first mission perfectly encapsulates the "X-COM Experience": Brutality.  I won't give away what that actually means; it would be unfair of me to ruin it for the uninitiated, but veterans of the original can guess what happens.  The effect is blunted a little bit because, like so many other tutorials, it dictates your troop movements, including a move no veteran would ever make.  But younger players will happily march right in without thinking.

The second mission gives you complete control except for the first few moves.  It introduces some of the aliens you'll be fighting as you try to defend the earth in the full game.  I lost my assault trooper in one round, even though I was being careful and tossed a smoke grenade, but my other guys made it out w/o a scratch.  I can only hope the rest of my troops will be so lucky.

Anyway, make no mistake, I think this will be the gotta-have-it game of the year.


Faster-Than-Light

Faster-Than-Light, or FTL, is a crowd-funded indie game, released earlier this month, that I picked up late last week via a sale somewhere (can't remember where).  The game is considered to be "rogue-like" in that it is a top-down view of several rooms.  While they are very clean and refined, the graphics look like they would be at home on a 486 DX2 and an 800 x 600 monitor, but don't let that fool you; this game is a perfect example of why game budgets should concentrate on design, not graphics.

You are given command of a crew and starship and are tasked with delivering the plans for a super-weapon to your commanders 8 sectors away.  The primary interface is a top-down view of the ship (this supposedly makes it "rogue-like"), which is divided into rooms/compartments that your crew members occupy.  Most compartments contain vital ship systems (weapons, helm, shields, engines, etc).  Crew members occupying these compartments interact with these systems to give them small boosts, repair them, or receive bonuses from them.  You have direct control over the ship's navigation, energy distribution, modification, and attack strategy, in addition to control over where your crew members are stationed.

The gameplay is really very simple and quick, but the real meat of the game is the tactical and strategic decisions that lie just below the surface.  The play styles made possible by the variety of available ship modifications are extremely diverse.  You are able to attack enemy ships with weapons, employ a variety of drones, teleport your crew onto enemy ships to sabotage systems or kill their crew, or FTL jump away from combat if you find yourself at a disadvantage.  At the same time you will have to protect your ship from enemy weapons fire, fend off enemy boarding actions, repair hull breaches, and avoid asteroids and solar flares.

The game's difficulty has been well-tuned to be very difficult, but not impossible, even on the "easy" setting.  You don't know what you're getting into with each encounter until you're deep in it.  There's also no "reload last save" option; when you die (all your crew are dead, or the ship explodes), you failed, and have to start over from the beginning.  Game over.  This means a lot of play-throughs end well before the ship reaches its destination.  But given the wide variety of play options available, restarting isn't all that bad.  You can use what you learned in the previous attempt to make it a little further.

If you are able to survive all of your encounters, you will be "rewarded" with a boss to defeat in the final sector.  I almost shit a brick when I discovered I would have to fight the boss not once, but three times.  Each time it had a new configuration that required an entirely different approach to be successful.  Out of 20-30 play-throughs on easy, I have beaten the game once.  And about a half second after I scored the final blow on the boss, my ship took a fatal blow.  But when I saw I had won after all, I was stoked!

This game is not going to be for everyone, given that it really is pretty damned hard: you can play a great run and still lose to random encounters, or make it to the end only to discover you don't have a chance in hell of beating the boss.  All this on top of no "reload from last save" option.  But if you're able to strap on your big-boy space pants, and learn to take your losses with your wins, there is a ton of fun to be had in this game.

As an aside, FTL bears a striking resemblance to a board game known as Space Alert.  SA is one of my gaming group's all-time favorites.  The game consists of 2-5 players cooperatively moving around a starship and activating consoles (shields, weapons, etc) to deal with threats trying to destroy the ship from inside and out.  The game is also pretty brutal when you move beyond the easiest levels.  If you like FTL, check out Space Alert if you can find a copy.  And if you like Space Alert, you're already 90% of the way to enjoying FTL.


/endofline

22 September, 2012

A quick primer on statistics, pt 2. Inferential Stats and Simulation

Last time I talked about statistics, I limited my discussion to the statistics used to describe the distribution of results from random processes.  Those methods are the fundamental parts that can be assembled into the statistical methods used to estimate unknown parameters, and differences between unknown parameters.

What follows below is a whirlwind tour of what is essentially at least a quarter-long class in upper division undergraduate statistics.  Again, Wikipedia and Khan Academy are great resources to learn more.

Inferential Statistics

Inferential statistics describes the set of methods used to estimate the unknown parameters of random events.  The most common of these methods rely on observed data to produce estimates of these unknown parameter values.

A classic example of the use of inferential statistics is estimating the probability of heads for an unfair coin, i.e. a coin that may not come up heads as frequently as it comes up tails when flipped in the air.  A similar (and more applicable) example would be to estimate the probability that an Edge of the Empire dice pool produces more successes than failures.

Presume we have no reliable way to calculate how often this coin will come up heads based on its physical qualities, and need an alternate method to estimate this probability.  Essentially, we have the following situation, expressed in the notation I explained before:

Pr(Heads) = p

But we do not know the value of p, beyond the fact it lies between 0 (it never comes up heads, the probability of heads is 0%) and 1 (it always comes up heads, the probability of heads is 100%).

Now that we've identified the problem, and what we're trying to find (the value of p), we will make some assumptions to VASTLY simplify our problem:
  1. p is constant during the experiment, i.e. p does not change value between flips.
  2. The results of each flip are independent, i.e. the results of one flip do not affect the results of any other flip.
  3. The only possible outcomes for each flip are heads or tails.
  4. Every flip produces a valid outcome (either heads or tails).
  5. The variable X (the total number of heads in a set of n trials) has a binomial distribution, with parameters p and n.
The first two are basic, and I'm not going to discuss them in any depth, but in statistics we call this iid.  The third and fourth assumptions allow us to make the fifth, which states that we will assume that X conforms to the binomial distribution.  This is a very commonly used distribution when we want to estimate the probability of an event.  Technically, the number of heads produced from a number of flips, X, is what is truly binomially distributed (as stated above), not the probability, p, but I'll reconcile this in a moment.

With the distribution defined, we have a paradigm to work within, and useful defined equations to produce parameter estimates.  The distribution has two parameters: the number of trials (in this case, each flip is a 'trial') and the probability of the trial being a success (in this case, success is the flip coming up heads).  Note that this second parameter is exactly what we are interested in estimating: Pr(Heads) = p.  We also have control over the number of trials we perform, n.  So, it can be shown that the best estimate of p is the number of successes divided by the number of trials performed.  Essentially:

X/n = estimated p = estimated Pr(Heads)

This formula represents 2 different concepts:
  1. X/n is the proportion of trials in our experiment that came up heads.
  2. X/n is the probability of a single trial in our experiment coming up heads, Pr(Heads).
These two concepts are equivalent: the proportion of successes over all trials may be interpreted as the probability of success on a single trial.  This will be important below.

All we need now is the data.  To generate the data, we perform an experiment, in which we simply take the coin and flip it 20 times.  Or 100 times.  Or 100,000 times.  But let's start out small, with 20 coin flips, and we'll say this produced 7 heads.  Now we can calculate our estimate:

7 heads total / 20 trials = 0.35 = estimated Pr(Heads)
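
If you'd rather let R (the stats package I work in) do the flipping, here's a minimal sketch of the same experiment.  Everything here is illustrative: rbinom() stands in for the physical coin, and the "true" probability of 0.35 is hard-coded purely for demonstration; in a real experiment we would never know it.

  # Simulate the 20-flip experiment.  In a real experiment we would NOT
  # know prob; 0.35 is used purely for illustration.
  set.seed(42)                                # make the example reproducible
  n <- 20                                     # number of trials (flips)
  flips <- rbinom(n, size = 1, prob = 0.35)   # 1 = heads, 0 = tails
  x <- sum(flips)                             # X: total heads observed
  x / n                                       # X/n: our point estimate of Pr(Heads)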

This is essentially the scenic route to exactly what you would have done anyway to figure this out.  But by giving the justification and walking through these steps, we have a simple example that shows our route to get what we need: an estimate, or inference, of the value of a previously unknown and incalculable parameter, in a situation where we don't fully understand the underlying mechanism that produces the results.

Point Estimates and Sample Sizes

The value reported above, p = 0.35, is a point estimate of the probability of heads.  Point estimates are a measure of centrality, and indicate the most likely value of the parameter, given the data.  If you look back to the previous post, you should be reminded of the difference between an estimate and a parameter, and see that this is an estimate.  Now, if we repeated the experiment (flip a coin 20 times, count the total number of heads), we may get different point estimates.  This further shows that the result is not necessarily the parameter value.

If we wanted to be more confident about our estimate, we could increase our sample size by increasing the number of times we flip the coin.  Many curious minds may ask "why does increasing our sample size increase our confidence about the estimate?", which is a great question.  The details are beyond the scope of this discussion, so I'll simply invoke the Weak Law of Large Numbers, which states that as the sample size increases, the observed mean converges on the actual expected value.  So larger samples tend to produce more reliable (but not necessarily perfect) estimates of parameters.
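
A quick, hedged R sketch of that idea (the numbers come from simulated coins, not theory): repeat the 20-flip experiment a bunch of times and the point estimates scatter widely; repeat it with 2000 flips per experiment and they hug the true value.

  # Spread of point estimates at two sample sizes (true p = 0.35 for illustration)
  set.seed(99)
  est.small <- replicate(1000, mean(rbinom(20,   size = 1, prob = 0.35)))
  est.big   <- replicate(1000, mean(rbinom(2000, size = 1, prob = 0.35)))
  sd(est.small)   # ~0.11:  n = 20 estimates bounce around a lot
  sd(est.big)     # ~0.011: n = 2000 estimates are far more reliable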

Simulation or: "How I Learned to Stop Worrying and Love The RNG"

So, we have shown how to estimate a parameter based on observed data from experiments we have performed when we do not understand the underlying mechanism that produces the result.  Now, let's re-examine our 1d6 example from the previous post.  Let's say we wanted to find the probability of rolling a 5 or 6 on any roll.  In this case, we do understand the underlying distribution that produces the results: there is a 1/6 chance of producing, respectively, a 1, 2, 3, 4, 5, or 6 on any roll of the die.  We could use our knowledge of expected values to find the parameter value (in this case it would be 1/3), but that would be boring!

Instead, we use what we just learned about the binomial distribution and inferential statistics to perform an experiment.  We roll 1d6, physically, 20 times, and get 8 rolls that came up a 5 or a 6.  Based on what we did above, this would lead us to estimate that there is an 8/20 = 0.4 chance that we roll a 5 or 6 on a die.  [Note that no 20-roll experiment could ever produce the REAL probability of 1/3, since 1/3 is not a multiple of 1/20.]

Now, if we desired a more reliable estimate, we could continue to roll the die many more times, recording each result and calculating the overall proportion of trials that produced 5s or 6s, which we can interpret as the probability of any roll coming up a 5 or a 6.  However, this method becomes rather tedious, and we have other tools at our disposal to automate this process.

With some code, we can create a program that will randomly select a value from the set {1, 2, 3, 4, 5, 6}, each with 1/6 probability, which is exactly the distribution we are sampling from, and calculate the proportion of results that are 5s or 6s, which we have established can be interpreted as a probability.  This is known as Monte Carlo sampling, and relies on the computer's (pseudo)random number generator to randomly sample from known distributions to estimate parameter values.  By invoking the weak law of large numbers, the results of such a simulation should produce parameter estimates that converge to the actual expected values.  This requires no explicit calculation of expected values, which can become very complex in some situations, and much larger sample sizes can be produced in much less time than similar physical experiments.  It simply requires that we have a very good understanding of the underlying distributions.
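
In R, the whole Monte Carlo experiment is a couple of lines.  This is just a sketch of the approach described above, with the sample size pulled out of thin air:

  # Monte Carlo estimate of Pr(roll a 5 or 6) on 1d6
  set.seed(13)
  n.rolls <- 100000                              # far more trials than we'd roll by hand
  rolls <- sample(1:6, n.rolls, replace = TRUE)  # each face has probability 1/6
  mean(rolls >= 5)                               # proportion of 5s and 6s; converges to 1/3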

Technically, the computer is unable to produce truly random numbers, but today's pseudo-random number generators are so good that, for our purposes, there is practically no difference.


Confidence Intervals, Hypothesis Testing, and Simulation

Typically, the purpose of invoking inferential statistical methods is to estimate parameters that are unknown and cannot be calculated, or to estimate the difference between two or more parameters.  The former is typically done by calculating confidence intervals (CI's) from observed data, and the latter by hypothesis testing.  Really, these are two sides of the same coin.  What you need to know is: as the sample size, n, increases, the confidence intervals become narrower (to represent that the parameter estimates are more reliable), and observed differences between parameter estimates are more likely to be declared significant, because larger sample sizes can detect smaller differences.

The term "p-value" comes into play at this point, and is frequently recognized and frequently poorly understood concept, even by professionals that use statistics on a daily basis.  For the purposes of this discussion, people passingly familiar with this concept need to understand that everything I say about hypothesis testing bears true for p-value as well.

Back to the point!  Which is: our ability to hypothesis test for a difference in estimates is dependent, at least in part, on our sample size.  Since we can make our sample in simulations arbitrarily large, CIs and hypothesis testing become fucking useless.  Further, CIs describe the uncertainty around the mean of the distribution, not the distribution itself.
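
To see the problem in action, here's a hedged little R demonstration using the normal-approximation CI for a proportion (one of several CI recipes; I'm picking it because it's the simplest, not because it's what any particular study used):

  # Width of a 95% CI for a proportion shrinks toward 0 as n grows.
  # With simulated data, n is whatever we want, so the interval mostly
  # reflects our patience, not the distribution we care about.
  ci.width <- function(p.hat, n) 2 * 1.96 * sqrt(p.hat * (1 - p.hat) / n)
  sapply(c(20, 1000, 1e6), function(n) ci.width(0.35, n))
  # ~0.42, ~0.059, ~0.0019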

Enter the Probability Interval

Probability intervals are a concept I was first introduced to while studying Bayesian statistics (seriously, don't worry about it), and are similar to Bayesian credibility intervals.  They are typically defined as the narrowest interval that contains XX% of the observations from the entire distribution of observations.  They are derived from the raw observed (or simulated) data, and briefly describe the entire data set, not just the mean (as CI's do).  They become more reliable as sample sizes increase, but do not become substantially narrower as sample size increases.  This makes them ideal for discussing and reporting simulation data.
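
Here's a minimal R sketch of how I'd compute one.  The function name is my own invention, and "slide the narrowest window over the sorted data" is just the brute-force reading of the definition above:

  # Narrowest interval containing (at least) 90% of the observations:
  # sort the data, slide a window covering 90% of the points, keep the shortest.
  prob.interval <- function(x, coverage = 0.90) {
    x <- sort(x)
    n <- length(x)
    k <- ceiling(coverage * n)            # points each window must cover
    starts <- 1:(n - k + 1)
    widths <- x[starts + k - 1] - x[starts]
    i <- which.min(widths)                # narrowest window wins
    c(lower = x[i], upper = x[i + k - 1])
  }
  set.seed(7)
  prob.interval(rgamma(10000, shape = 2))  # note: NOT symmetric around the mean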

Some things to remember about PI's:

  • PI's are not centered on the mean, since the mean is not used to calculate them in any way.
  • PI's are not symmetric around the median or the mode, since distributions may be asymmetric.
  • PI's do not rely on or assume an underlying distribution (CI's typically rely on the normal).
  • PI's may be reported with different %'s, e.g. a 90% PI covers 90% of the observations, and a 95% PI covers 95% of the observations.
  • PI's are only a synopsis; information is lost when ONLY a PI is reported.  Full histograms are usually necessary to fully visualize a distribution.

Alright... That's enough for now.  With all the tools I need at least mentioned, even though nobody really cares, I can start talking about what I really want to talk about:

The probability implications of the Edge of the Empire dice system... FINALLY!!!

/endofline

EDIT: Sorry for the delay on this post.  It was sitting at around 90% finished most of the week, but I fell into EotE forum discussions and FTL... Which is AWESOME!!! TRY IT!!!  BUY IT!!!

13 September, 2012

A quick primer on statistics, pt 1. Descriptive Stats

I will be using statistics in my future posts to support, explain, or justify opinions that I hold in regards to game design.  And, to that end, I'd like to have a post that I can refer readers to that explains some of the methods and terminology that I use.

As a warning, I am oversimplifying many of these concepts because this is not intended to be a complete course on inferential or descriptive statistics, just a primer to familiarize somewhat educated individuals with the definitions of the terms that I must use to communicate these concepts.  If you see some egregious error, please let me know.  If you know enough to see where I gloss over some concept nuances, great, but please don't correct every little omission or detail; it's not gonna help anything.

If you want more information on any of these concepts, I strongly suggest you check Wikipedia or spend some time on Khan Academy.

This is as good a place as any to address my qualifications.  As an undergraduate, I took a single statistics course, and as a veterinarian, I received very little statistical training.  Now, however, I am a PhD candidate in Epidemiology at a major public California university.  While my pursued degree is in epidemiology (the study of outbreaks and disease in populations), my focus is statistics and biostatistics.  I have over 60 credit hours of core and elective statistical training, and I have been a teaching assistant for more than 50 hours of statistical courses.  My dissertation involves substantial amounts of simulation modeling and programming in R (an open source statistical package).

Anyway, let's get started.

Probability vs Odds

I'll use probability almost exclusively when discussing outcomes.  Odds are not frequently reported (at least in North America; I'm told they're more common in the UK).  I find odds to be less intuitive than probability, and probability to be easier to work with mathematically than odds.  It is easy to calculate the odds from probability, and the probability from odds.

The probability of an event occurring is denoted by "Pr(Event)".  For example, the probability of a 6-sided die (a "d6") coming up 3 is written "Pr(3)".  The standard notation for an event not occurring places a bar over the event, but since I can't place a bar over the word here, I'm going to represent the event not occurring as "Pr(Not Event)".  For example, I will write the probability of a d6 not coming up odd as "Pr(Not Odd)".

A slightly more complicated version of probability is conditional probability, which refers to the probability of an event under certain conditions.  For example, the conditional probability of a 6-sided die coming up 3 when the result is odd is 1/3.  This is represented by the notation "Pr(3|Odd)", which is read "the probability of the result being 3 given the result is odd."
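
Jumping ahead a bit (simulation gets a proper treatment in a later post), the conditional part is easy to see in R: throw away every roll that doesn't meet the condition, then look at what's left.  A sketch:

  # Pr(3 | Odd) by simulation: restrict attention to rolls meeting the condition
  set.seed(3)
  rolls <- sample(1:6, 100000, replace = TRUE)
  odd <- rolls[rolls %% 2 == 1]   # keep only the odd results (the condition)
  mean(odd == 3)                  # fraction of those that are 3s; converges to 1/3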

Statistics & Parameters

These terms are largely synonymous, and I will probably use them interchangeably.  Technically speaking, statistics are estimated from, and used to briefly describe, data.  Parameters are the REAL values that we are typically attempting to estimate with statistics.  The difference is that the true values of parameters are typically unknown, and probably unknowable, but can be estimated using statistics.

While this seems like a really esoteric and meaningless difference, there is a reason I need to mention it.  When using various methods to estimate parameters, the results are just that: estimates.  When I state that a value is a parameter estimate, I am not claiming this value to be the exact value of the estimated parameter; there may be some sources of error.  However, any estimates provided will be as accurate as possible.  Accuracy means that the point estimate (see below) is as near as possible to the actual parameter value, and any interval (see further below) around the point estimate is as narrow as possible.

Distributions

A distribution is the combination of all of the possible outcomes of a random process, and the probability of each of the individual outcomes.  For example, the distribution of results of a 6-sided die would be:

  • Pr(1) = 1/6
  • Pr(2) = 1/6
  • Pr(3) = 1/6
  • Pr(4) = 1/6
  • Pr(5) = 1/6
  • Pr(6) = 1/6

While this is the most informative way of presenting the results, it's clear that for even slightly more complicated distributions, and joint distributions of 2 or more results, this method becomes far too cumbersome, and we use other synoptic values to describe the distribution.
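
As a sanity check (and a preview of the simulation methods coming in a later post), a few lines of R recover this distribution empirically.  Purely illustrative:

  # Tabulate a pile of simulated d6 rolls; each face should come out near 1/6
  set.seed(66)
  rolls <- sample(1:6, 60000, replace = TRUE)
  round(prop.table(table(rolls)), 3)   # all six proportions land near 0.167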

There are a large number of distributions that have been described and named and studied and labeled.  The normal (aka Gaussian) distribution is probably the most well known of these, but it is of very limited use in these situations.  It imposes a number of assumptions that may not be met in data I simulate, and it will rarely be invoked in this blog.  I will more frequently be using the binomial distribution, which describes the number of successes in a number of trials repeated under the same conditions.

Measure of Centrality

This phrase refers to statistics that describe the center point or most common results of a distribution.  The arithmetic mean is the statistic most people are familiar with.  While it is valuable, it is also frequently biased (lies away from the true center point of the data) and can poorly represent the actual information in the data.  The median is the value where 1/2 of the distribution lies above and 1/2 lies below.  It is useful in some circumstances, especially in heavily skewed distributions with many outliers on one side.  In the discipline of statistics, however, we frequently use the expected value of a distribution as the real measure of centrality, abbreviated "E(X)" for the "expected value of X".  The expected value of a discrete distribution (basically ALL of the distributions we'll discuss as results of dice rolls) is calculated by multiplying each possible result by its probability, then summing.  So, E(d6) would be calculated:

1/6(1) + 1/6(2) + 1/6(3) + 1/6(4) + 1/6(5) + 1/6(6) = 1/6 + 2/6 +...+ 6/6 = 21/6 = 3.5.

The arithmetic mean over a very, very large number of rolls converges to the expected value (due to the weak law of large numbers).
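
Both halves of that claim fit in a few lines of R (a sketch; the simulated mean will be close to, not exactly, 3.5):

  # Exact expected value of 1d6: sum of (value * probability)
  sum((1:6) * (1/6))                       # 3.5
  # Arithmetic mean of a very large number of simulated rolls
  set.seed(6)
  mean(sample(1:6, 1e6, replace = TRUE))   # ~3.5, per the weak law of large numbers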

Measure of Dispersion

The measures of dispersion describe how far away individual observations fall from the central point.  It may be thought of in a VERY limited sense as how "random" individual observations in a distribution tend to be.  Higher values indicate that individual observations tend to fall further away from the middle than towards the middle.  The parameters variance and standard deviation, abbreviated Var and StDev, are commonly used measures of dispersion.  I'm not going to provide the equations here.
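
In lieu of equations, a quick R sketch: simulate a heap of d6 rolls and ask for the variance and standard deviation directly.

  # Dispersion of 1d6 by simulation (no equations required)
  set.seed(4)
  rolls <- sample(1:6, 1e6, replace = TRUE)
  var(rolls)   # ~2.92 (the exact value works out to 35/12)
  sd(rolls)    # ~1.71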

Measure of Dependence

Measures of dependence describe how related individual results from two distributions are to each other.  The two most common measures of dependence, BY FAR, are correlation and covariance, which assume a linear relationship between the two distributions.  The correlation of X and Y (there must ALWAYS be 2 outcomes for these) is denoted as
  • Cor(X,Y)
  • r_X,Y
And covariance is denoted Cov(X,Y).

It's interesting to point out that if r_X,Y =/= 0, there is ample evidence that X is not independent of Y, BUT if r = 0, there is NOT ample evidence to state that X and Y are independent.
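
That last point deserves a concrete example.  In this R sketch, Y is completely determined by X (about as dependent as it gets), yet the correlation comes out near zero, because the relationship isn't linear:

  # Dependence without (linear) correlation: Y = X^2 with X symmetric around 0
  set.seed(1)
  x <- runif(100000, min = -1, max = 1)
  y <- x^2          # y depends entirely on x...
  cor(x, y)         # ...but the correlation is approximately 0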

Okay, that's the end of part one; this became a LOT longer than I anticipated, and I want to get SOMETHING posted.  Next time I'll discuss statistical inference and simulation, which will build on what's posted here to support the methods I use when evaluating game mechanics.

Any questions? Post 'em below.  Thanks for reading.

/endofline

06 September, 2012

Understanding the Edge of the Empire attribute system

Many contemporary games (at least those I'm familiar with: Wizards products, Alternity, Warhammer Fantasy, West End's d6 Star Wars, etc) use a set of 6 numbers to represent the inherent, baseline physical, social, and mental characteristics of an individual.  I don't know why 6 has been chosen so frequently; whether it's tradition, the minimum number needed to get the appropriate fidelity, or just convenience, it always seems to be six.  Unofficially, these attributes are typically subdivided into physical 'hard' attributes and mental and social 'soft' attributes.

FFG's new Star Wars RPG, Edge of the Empire, is no different.  Characters in this system have the following characteristics:
  • Brawn
  • Agility
  • Intelligence
  • Cunning
  • Willpower
  • Presence
At first glance, my friends and I thought these seemed similar to WotC's classic Str, Dex, Con, Int, Wis, Cha, which we're all very familiar with from d20 games.  Constitution was combined with strength, and cunning was... something between dexterity and int?  Alright, no big deal: they found constitution to be redundant (many people do) and added cunning to get to that nice happy number 6.  Cunning sounded good, right?  That's what Han Solo was crazy good at, that devilish scoundrel.  Basically, 2 hard attributes and 4 soft.  We can work with this, everything else just translates, right?

Thing is, as we got deeper into the system, it didn't feel right... Skills were paired with strange attributes: skulduggery (thievery, sleight of hand, etc) was cunning, instead of agility, and perception was also paired with cunning.  Vigilance, a willpower skill, can be used to see if I remembered equipment?  Coerce is a willpower skill?  And the two initiative skills (wait, 2 initiative skills?), cool and vigilance, were paired with soft attributes, presence and willpower, respectively; initiative had previously been (almost) exclusively the realm of hard stats, dexterity in particular.

As I started reading and thinking about how this had been designed, I think I figured out how to view this new set of attributes.  Instead of splitting them as "hard" and "soft", it helps to split them into 3 groups, representing different kinds of resources:

  • Physical, contains brawn and agility
  • Mental, contains intellect and cunning
  • Social, contains willpower and presence
The physical group involves tangible resources, e.g. weapons, the character's body, rocks, etc.  The mental group involves information resources, e.g. book knowledge, habits of criminals, etc., in addition to perception abilities.  The social group is a bit harder to define, largely because of bleed between stats in previous systems.  Social resources include not only other individuals and a character's ability to interact with them, but also a character's sense of self, identity, and beliefs.

The first attribute listed in each of these groups (brawn, intellect, and willpower) reflects the character's raw physical, mental, and social resources, respectively.  For example, brawn represents physical potential and the ability to resist physical forces, and willpower represents an individual's social resources, a strong sense of self, and the ability to resist social manipulation.  The use of these attributes, and the skills associated with them, represents the "blunt force" application of these resources.  These attributes and skills tend to be more reactive in their applications.

The second attribute in each of these groups (agility, cunning, and presence) represents the character's ability to manipulate physical, mental, and social resources.  The resources being manipulated may be either internal (e.g. knowledge) or external (e.g. personnel, or a weapon) to the character.  The use of these attributes, and their skills, represents the "fine manipulation" of these resources.  These attributes and skills also tend to be more proactive in their applications.

This framework makes the way skills are paired with attributes make substantially more sense.  Though there are notable exceptions (e.g. active and passive perception are handled by intelligence and cunning, respectively).

While there are an unlimited number of changes when comparing the systems, there are two I'd like to draw attention to for those familiar with the Saga edition (or other d20 systems).  I think both changes are good, as they both pull some power away from the agility attribute, which historically seems to be more important than other attributes.  In the current system, there does not appear to be any outstanding "super stat", but that may change as the beta continues and we learn the system better.

Initiative

With one exception, initiative in WotC and TSR systems has been the domain of agility/dexterity, representing an individual's quick nerves and reaction time.  It is now governed by two social skills, cool and vigilance.  I think FFG's reasoning here is that combat, albeit adversarial, is still a social interaction, involving a number of individuals.  The ability to use social skills (reading faces, anticipating actions) is important to secure a favorable slot in the action queue.  Un-linking initiative from a physical characteristic can also explain the flexibility of initiative order.

Manual Dexterity vs a Devious Mind

While agility retains the skills for piloting, ranged combat, stealth, and acrobatic ability (and as I write this I go back to wondering if agility isn't still a little too useful...), cunning is now the attribute for skulduggery, the skill governing the picking of locks and pockets, in addition to figuring out how to break into secure locations.  This is no longer under agility (as it is in d20 systems) because the devs are acknowledging the need for a quick mind, in addition to quick fingers, to accomplish these tasks.

I hope that this helped and the system makes a little more sense... Next, I'm thinking I'll talk about the dice mechanics of the system.

/endofline

[EDIT: changed the name of the "Manual Dexterity" section.  On my first read-through I thought they had included "fine motor skills" in Cunning, but I was mistaken; Dexterity and Agility are still lumped in one stat.]

Edge of the Empire RPG

Wow, it's been forever since I posted up here.  But, I suppose I am persistent, if not consistent.  Turns out that, once again, I have a topic I have something to say about.

I was fortunate enough to be sitting in the Fantasy Flight Games 2012 In-Flight Report at GenCon this year when the new Star Wars RPG (part one), Edge of the Empire, was announced.  For those of you who have been lucky enough to avoid hearing me talk about it, this is the first of three planned releases of "seamlessly interchangeable" standalone Star Wars RPG products:
  • Edge of the Empire (due out early 2013), focusing on fringe campaigns
  • Age of Rebellion (due out early 2014), focusing on military (Rebel v Imperial) campaigns
  • Force and Destiny (due out early 2015), focusing on force-use campaigns
The Edge of the Empire beta launched during the presentation, and, amazingly, everyone in the audience got Oprah'd:

EVERYBODY GETS A FREE COOOPPAAAAAAAAAAAAY!!!!

For everyone else, copies were available for $30, but unfortunately, I've heard since that FFG is sold out, and copies are now only available from local retailers.  Only 5000 were printed.

The system uses a specialized narrative dice system adapted from FFG's Warhammer Fantasy RPG.  Dice stickers were included with the book, and a dice-rolling app is available for $5 on iOS and Android.  There are classes, but no levels; character advancement is achieved by spending earned experience points to purchase talents and skills, or access to new specializations (the new word for "class").  Character creation in a nutshell: characters are built by selecting a race, a career (a group of three specializations), and a specialization within that career, and are then given a pool of XP to buy their initial toys.  There are other systems ancillary to these basics, but they are not vital to understanding how the system works.

There is a novel 6-attribute system.  There are 23 general skills, 6 knowledge skills, and 5 combat skills; each skill is associated with a prime attribute.  There are 6 careers (Bounty Hunter, Hired Gun, Explorer, Colonist, Technician, and Smuggler).  Each career has 3 specializations.  Each career and specialization has a set of associated "class" skills, and each specialization has a 5-tiered talent tree with talents that provide a wide variety of bonuses, e.g. increased damage resistance, additional boost dice, the ability to ignore some penalties, new exclusive actions, and novel ways to spend destiny points.

Some characters from previous d20 editions will translate with almost no effort.  Personally, I've found that the scout (which was a bit of a catch-all in Saga edition) translates poorly in this system, and these characters will probably see a substantial 're-imagining' of their core concept.  But that's cool, continuity reboots are 'in' these days.  If you previously played (or are currently still playing) the d6 system, translation should be pretty easy.

I'm very pleased with the product.  It does a great job of capturing the feel of Star Wars, and sticks to the Rise of the Empire, Dark Times, and Rebellion eras.  The system is flexible and provides enough examples to give a GM the tools to bring what exists in his imagination (and the imaginations of his players) to paper.  Really, the game feels much closer to West End's d6 system than WotC's more contemporary d20 versions, and that's a good thing.  The rules are more flexible and the tone is darker.

And now, the dark side.  There are some issues with the game and the resolution system, but it's in beta.  The purpose of a beta is to tweak what can be tweaked to improve the game, so hopefully the devs will be responsive to feedback.

Get settled in... I'll probably be posting impressions and info about the game for the foreseeable future.  I'm also posting on FFG forums as "LethalDose" if you feel the need to troll me more publicly.

/endofline