Advancing the Social Contract for Gig Economy Workers

This post relates to a conference that I attended in Washington DC, USA in February 2017.

GigEconomyVoice

The Focus

On February 28, 2017, key policy makers, business leaders, thought leaders and industry associations came together to discuss the need for a new social contract for gig economy workers. This site is intended to share the conversation from the labor policy forum, continue the dialog and turn that dialog into action.

Panel Two

This panel will explore how we can consolidate and amplify the voice of the gig economy workforce through association, as well as through other channels, such as traditional/social media.

Moderator: Karen Dynan, Former Assistant Secretary for Economic Policy, US Department of the Treasury; Affiliate, Department of Economics, Harvard University

Panelists:

  • Sara Horowitz: Founder and Executive Director, Freelancers' Union
  • Althea Erickson: Senior Director, Global Advocacy and Policy, Etsy
  • James Collings: Chairman, IPSE
  • Ike Brannon: President, Capital Policy Analytics


Y'all ready for this?

Capacity Planning is all about trying to predict the impact that future events will have on current systems, and making sure that there are no bad outcomes.


Over the past few months, we have seen what may well become the most momentous decision for the UK take place… the Brexit Referendum.  My work over the past year or so has been to assess each of my client’s 200+ applications used for trading Equities and Derivatives and work out whether they could cope with the increased trading activity brought on by the referendum, or whether upgrades would be required.


But what exactly would the increase be?


This is often the challenge of Capacity Planning. How can you quantify the impact of something that has never happened before?


Scenarios:

You could take the approach of modelling the impact of an increase of 50%, 100%, 150% or 200% on the current activity.  This allows you to identify which of the applications would break “first”.  You may find that there are a handful of applications that can only cope with a 50% increase in volumes, whereas the majority can cope with a 200% increase.

Application | 50% increase | 100% increase | 150% increase | 200% increase
A           | Y            | Y             | Y             | Y
B           | Y            | Y             | N             | N
C           | N            | N             | N             | N
D           | Y            | Y             | Y             | N


This gives you an immediate “hit-list” of applications to which you should pay attention for potential upgrades.  But that is only if you think the impact of the future event (the Brexit vote, in this case) will result in an increase of more than 50% in trading volumes.
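
To make the scenario approach concrete, here is a minimal sketch in Python.  The current and maximum volumes per application are hypothetical (chosen to be consistent with the break-point table further down); in practice they would come from load tests or performance models.

```python
# Hypothetical per-application figures: current peak volume and the maximum
# volume each application is believed to sustain (e.g. from load testing).
capacity = {
    "A": {"current": 1000, "maximum": 3300},   # can absorb +230%
    "B": {"current": 1000, "maximum": 2170},   # can absorb +117%
    "C": {"current": 1000, "maximum": 1150},   # can absorb +15%
    "D": {"current": 1000, "maximum": 2680},   # can absorb +168%
}

scenarios = [0.50, 1.00, 1.50, 2.00]           # +50%, +100%, +150%, +200%

for app, figures in capacity.items():
    verdicts = []
    for uplift in scenarios:
        projected = figures["current"] * (1 + uplift)
        verdicts.append("Y" if projected <= figures["maximum"] else "N")
    print(app, " ".join(verdicts))              # e.g. B Y Y N N
```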


BreakPoints:

An alternative approach is to turn the whole modelling on its head.  Rather than finding out whether a particular application will break under a fixed increase in volumes, one looks at the maximum capability of each application and reports it in terms of the current workload.

Application | Maximum Increase
A           | 230%
B           | 117%
C           | 15%
D           | 168%


This view shows the weak points in the environment, and is a more detailed assessment than that provided by the “scenario” approach above.
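
The break-point view is really just the same data expressed the other way around; a small sketch, again assuming the current and maximum volumes are known:

```python
def maximum_increase(current_volume: float, maximum_volume: float) -> float:
    """Largest percentage uplift on the current volume the application can absorb."""
    return (maximum_volume / current_volume - 1) * 100

# Hypothetical figures, matching the table above.
print(f"B: {maximum_increase(1000, 2170):.0f}%")   # -> B: 117%
```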


Yes, but, WHAT EXACTLY would the increase be?


Ah yes.  This is still the $60m question.  Obviously, no one can know for certain what effect an unprecedented event will have… because it is unprecedented (the clue is in the name!).  But we do have a couple of approaches open to us.  We can find a “similar” event from the recent past and see what impact that had.  In this case, the General Election of 2015.  We can take that as a yardstick and apply the delta in business volumes seen back then (i.e. an increase of 150%) to the current business volumes (i.e. still an increase of 150%, but on a higher baseline of values).
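
As a back-of-the-envelope illustration of that yardstick calculation (all of the volumes below are made up):

```python
# Hypothetical daily trade volumes, purely for illustration.
election_2015_baseline = 400_000    # normal volume ahead of the 2015 General Election
election_2015_peak = 1_000_000      # peak observed around the election (+150%)
current_baseline = 600_000          # today's normal volume, on a higher baseline

observed_uplift = election_2015_peak / election_2015_baseline - 1    # 1.5, i.e. +150%
projected_peak = current_baseline * (1 + observed_uplift)            # 1,500,000

print(f"Projected referendum peak: {projected_peak:,.0f} trades per day")
```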


The other alternative, and always one to make the experienced Capacity Planner smile, is to actually ask the Business.  After all, who knows the business of trading Equities and Derivatives better than the traders themselves?  The reason this makes me (and others) smile is that quite often the people closest to the sharp end of the business don’t actually know any more than you do about what the impact of a future event will be.  Even when that “future event” is wholly within the business’s control, like a planned strategy of selling twice as many widgets as in the previous year.  Often the sales won’t come through… or the salespeople will be super-successful and sell more than 2x.  They aren’t going to STOP selling when they hit 2x (as per the strategic plan); they are going to keep on selling more and more!


So… I hear you ask… what approach did *I* take, and did it work?


The joint decision was to go with a 100% uplift on previous peak volumes for all systems.  I say JOINT decision because, while it was the proposal that I put to the business, I always like business assumptions to be “owned” by the business.  As the referendum results came through on that Friday morning, I was monitoring the systems throughout.  The exit poll at 10pm told us that the result was very close, and that actually helped.  For months the pollsters had been predicting a close result, so very few people had taken a completely IN or OUT position with their trades.  There was no panic buying or selling as people looked to move out of extreme positions; instead, there was a steady trickle of trades as people consolidated what they had.  The eventual uptick in trades was only around 30% at peak… something easily manageable.


So was this all a monumental waste of time?  Is Capacity Planning just a futile activity with no beneficial impact on the business?  I would say NO… but expanding on that statement is the subject of a future blog post.


It ain't what you do

It ain’t what you do (it’s the way that you do it)


Clouds…. Puffy things… stretching as far as the eye can see… limitless.

White, wispy clouds that bring thoughts of hazy summer days?

Or dark, foreboding, angry clouds that warn of heavy rain or snow?


That’s the thing about Clouds.  They can be all things to all men, and what sort of clouds they are will change what you think about them completely.


The IT cloud is no different.  How your cloud has been implemented will completely change what value it brings to your organisation, how you can get the best out of it and, for the Capacity Manager… how you need to manage it.


I’ve blogged before about the issues related to the sharing of resources within a cloud.  If two clients are promised the same CPU resource, then you had better be certain that they don’t both want to make use of it at exactly the same time; otherwise you’re going to run into the same contention for resources that Capacity Managers have been dealing with for ages.  The cloud isn’t helping here, nor is it introducing anything new to the equation.


But recently, I have been involved in the Capacity Management of a private cloud in which the client has decided that there would be NO sharing of resources!

This is a new concept to me.


Effectively, a bunch of blades are combined into a single VMware cluster, and that cluster then hosts a multitude of guest OSs (Linux, Windows, etc.).  But whereas in a normal cloud architecture you would oversell the CPU and memory by a factor of X, in this private cloud the memory was deliberately undersold, by a factor of 0.9.


This meant that for every 512 GB of memory installed on a physical blade, only 460 GB would be available for client use.  CPU was still being oversold at a ratio of 2.5x on a per-core basis, which on these blades meant 80 cores × 2.5 = 200 vCores.
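
Put another way, the capacity available for allocation is just the physical resource scaled by the policy factors.  A quick sketch of that arithmetic:

```python
# Per-blade figures from this cluster; the two factors are the allocation policy.
physical_memory_gb = 512
physical_cores = 80

memory_allocation_factor = 0.9    # memory deliberately under-allocated
cpu_overcommit_ratio = 2.5        # CPU over-allocated per physical core

allocatable_memory_gb = int(physical_memory_gb * memory_allocation_factor)  # 460
allocatable_vcores = int(physical_cores * cpu_overcommit_ratio)             # 200

print(f"{allocatable_memory_gb} GB vMemory, {allocatable_vcores} vCores per blade")
```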


The challenge for the Capacity Manager changes from the usual assessment of utilisation approaching 100% of the available resources to an assessment of when allocation will reach 100% of the available resources: in this case, checking whether the allocation of CPU and memory to guests would hit the 200 vCore limit or the 460 GB vMemory limit first.


Consider the following list of guest demands (arriving in this order):

Guest Name       | vCores (running total) | vMemory GB (running total)
A                | 8 (8)                  | 32 (32)
B (powered down) | 16 (24)                | 128 (160)
C                | 2 (26)                 | 8 (168)
D                | 4 (30)                 | 16 (184)
E (powered down) | 8 (38)                 | 32 (216)
F                | 32 (70)                | 128 (344)
G                | 16 (86)                | 128 (472)
H                | 4 (90)                 | 32 (376)
I                | 8 (98)                 | 64 (440)
J                | 4 (102)                | 32 (472)
K                | 8 (110)                | 16 (456)


As you can see from the table above, although the total vCore demand is only 110 cores (far below the 200 vCore capacity limit), there is too much vMemory demand: two of the guests (G and J) cannot be accommodated because the accumulated memory requirement would exceed the 460 GB that is available.  Even though two of the guests (B and E) are powered down, the Capacity Manager must assess the impact on the cluster for when they are powered up, and therefore their vMemory allocation is considered to be active.  Their combined 160 GB of memory cannot be allocated to anyone else… just in case they are powered up and need it.
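
A minimal sketch of that allocation check, reproducing the admissions in the table above (powered-down guests still count against the limits, so the power flag is carried but deliberately ignored):

```python
# (name, vcores, vmemory_gb, powered_on) for each guest, in arrival order.
guests = [
    ("A", 8, 32, True),   ("B", 16, 128, False), ("C", 2, 8, True),
    ("D", 4, 16, True),   ("E", 8, 32, False),   ("F", 32, 128, True),
    ("G", 16, 128, True), ("H", 4, 32, True),    ("I", 8, 64, True),
    ("J", 4, 32, True),   ("K", 8, 16, True),
]

VCORE_LIMIT, VMEM_LIMIT_GB = 200, 460

allocated_cores = 0
allocated_mem_gb = 0
for name, vcores, vmem_gb, powered_on in guests:
    # The powered_on flag is deliberately not consulted: a powered-down guest's
    # allocation is still reserved in case it is powered up later.
    if (allocated_cores + vcores > VCORE_LIMIT
            or allocated_mem_gb + vmem_gb > VMEM_LIMIT_GB):
        print(f"Guest {name} cannot be accommodated")
        continue
    allocated_cores += vcores
    allocated_mem_gb += vmem_gb

print(f"Allocated: {allocated_cores} vCores, {allocated_mem_gb} GB vMemory")
# -> Guests G and J are rejected; 456 GB of the 460 GB ends up allocated.
```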


In utilisation-based Capacity Management, it would be normal to take into account the fact that these two guests are powered down, and also to look at the utilisation of all the other guests.  For example, guest F, which has an allocation of 128 GB vMemory, might only use 50% of this allocation at its peak.  In that case the unused 64 GB of vMemory would be available for allocation to other guests, and we would be able to make fuller use of the assets.


In allocation-based Capacity Management, the business’s policy of not over-allocating resources means that we must ignore this potentially available resource and effectively plan for a “worst-case scenario”.
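
To make the difference concrete, here is a small sketch comparing the memory headroom the two approaches would report for a handful of guests.  The peak-usage figures are hypothetical, with guest F at 50% of its allocation as in the example above.

```python
VMEM_LIMIT_GB = 460

# Per-guest (allocated_gb, peak_used_gb); usage figures are made up.
# B and E are powered down, so their measured usage is zero.
guests = {"B": (128, 0), "E": (32, 0), "F": (128, 64)}

allocation_headroom = VMEM_LIMIT_GB - sum(alloc for alloc, _ in guests.values())
utilisation_headroom = VMEM_LIMIT_GB - sum(used for _, used in guests.values())

print(f"Allocation-based headroom:  {allocation_headroom} GB")   # 460 - 288 = 172
print(f"Utilisation-based headroom: {utilisation_headroom} GB")  # 460 - 64  = 396
```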


So you see… it ain’t what you do (converting from stand-alone infrastructure to a cloud-based solution), it’s the way that you do it.  Doing it with inefficient policies will not deliver the full benefits that you might have hoped for.

