$5B for Wall is Chump Change Compared to Torpedoes in Productivity

Think what you will about a wall, a fence, or whatever you might call it. I’m an American citizen, and I’ve just finished my dinner of wonderful Mexican tortas ahogadas while south of that line on a map.

The shutdown’s fallout goes far beyond a wall or the $5 billion to pay for it.

Spending by the US federal government hovers around $4 trillion each year.  (Check out projections by the Congressional Budget Office.)

With federal spending estimated at $4.7 trillion for 2019, $5 billion is chump change at about 0.1%.

During a shutdown, federal employees like the somewhat disgruntled yet dutiful TSA workers I met on my way to Mexico earlier today suffer a demotivating lack of pay. Surely some of these workers will find, or are already finding, non-government employment.

In the competitive labor market we live in, the best and most capable government employees are going to look elsewhere. Meanwhile, I doubt ‘essential’ workers are operating at any semblance of top productivity given uncertainty and growing resentment. Not to mention … don’t we have a constitutional amendment outlawing this? Oh it’s a confusing world.

So, doing some back-of-the-envelope math, the government has stewarded more or less $425 billion in outlays during 33 days of firing on less than all cylinders.
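That back-of-the-envelope math runs like this, using the $4.7 trillion estimate cited above:

```ruby
# Federal outlays stewarded during the 33-day shutdown, assuming
# $4.7 trillion in annual spending spread evenly across the year.
annual_outlays = 4.7e12
shutdown_share = annual_outlays / 365.0 * 33
puts (shutdown_share / 1e9).round          # ~425 (billions of dollars)

# The wall's $5 billion as a share of annual spending.
puts (5e9 / annual_outlays * 100).round(2) # ~0.11 (percent)
```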

Meanwhile, the government is almost surely going to issue backpay at some point. So, everybody’s going to get paid pretty much the same amount for working at diminished productivity or not at all.

Similarly, employee retention will likely take a hit. If so, we’ll see money diverted away from accomplishing the business of government back into hiring, training and the necessary acclimation time it takes new workers to learn and become adept at their roles. Retention issues mean there will be gaps in which some tasks won’t get done efficiently, correctly or on time. Work will be duplicated or redone as well.

Taxpayers, is this really how you want the spending of your money managed?

Now, we see all kinds of posturing on both sides. Trump is afraid of backlash if he breaks a campaign promise — not that I think the good folks here in Mexico are in any hurry to pay for his wall.

Meanwhile, opposition leadership seems all too content to let the games go on as we lose untold billions in lost productivity and operational inefficiencies. When better border security becomes cheaper than the shutdown’s productivity losses, the debate’s premises do change. Is better border security wrong when it’s cheaper, or is there too much moral hazard in such a rationalization?

Wouldn’t it make sense for Democrats to just give Trump his wall and stop wasting money? Wouldn’t it make sense for Trump to say, “let’s table this a while”, so that the business of government can go on with less interruption?

To add to the fog, we see Trump pushing for ‘emergency action’ that might further weaken the separation of powers in America.

Both sides find themselves too committed. Too much political capital is at stake. Both sides look a bit like kindergarteners, and the American people are losing.

Math Pierces Steve Jobs’ Reality Distortion Field After 35 Years

In 2011, a hero to so many of us, myself included, was in the last stages of pancreatic cancer.  Steve Jobs, an icon of the PC revolution and later of what should rightfully be called “the mobile revolution”, was working hand in hand with author Walter Isaacson. The pair were completing the last biography he would take part in while alive.

The book, simply called Steve Jobs, was published 19 days after Jobs’ death and quickly rose to bestseller status. The book contains many memorable tales about the way he motivated others. In one often referenced experience, we read about Jobs driving home the point that making computers faster can save entire human lifetimes.

Now, 35 years after Jobs related it in August 1983, we can plainly see that the math behind his dramatic example was wrong … but maybe that’s okay.

Jobs was not an engineer. He was also not the original inventor of many ideas which made Apple’s products a game changer. Like many successful visionaries, his value to the market — and to the world — was his ability to help others connect the dots.

In leading Apple, he created narratives that captured imaginations and provided motivation. His famous ‘reality distortion field’ worked because he showed his employees things they wanted to believe. He packaged goals as a better reality than the one his audience existed in moments before.

People forgot about preconceived limits. As perceptions and desires outstripped current reality, his engineers and product designers were doggedly motivated to close the gap —  by improving actual reality. 

In the final biography, Isaacson relates the particular moment which inspired many including the team at my company Pocketmath. Jobs had always championed the quality of the user’s experience, and one day he set his sights on the onerously slow boot time common to computers of the era.

According to Isaacson, the Apple CEO explained to engineer Larry Kenyon what dramatic impact 10 seconds of boot time savings would have:

“Jobs went to a whiteboard and showed that if there were five million people using the Mac, and it took ten seconds extra to turn it on every day, that added up to three hundred million or so hours per year … [sic] … equivalent of at least one hundred lifetimes saved per year.”

The story sounds great, and we have no reason to doubt Jobs really delivered such a message. Furthermore, it likely did have a profound effect. Isaacson later writes that Kenyon produced boot times a full 28 seconds faster.

However, the math doesn’t add up. Based on the assumptions outlined, it saves far less than 100 lifetimes.

In the 2005 book Revolution in the Valley: The Insanely Great Story of How the Mac Was Made, author Andy Hertzfeld quotes Steve Jobs somewhat differently:

“Well, let’s say you can shave 10 seconds off the boot time. Multiply that by 5 million users and that’s 50 million seconds every single day. Over a year, that’s probably dozens of lifetimes. Just think about it. If you could make it boot 10 seconds faster, you’ll save a dozen lives. That’s really worth it, don’t you think?”

Let’s walk through the math.

Assume, just as Jobs did, that there would be 5 million people using the Mac. Likewise, assume the time savings is 10 seconds each day per person. Each day, 50 million seconds are saved. In a year, that translates to a savings of 50 million multiplied by 365 days which is 18.25 billion seconds.

Let’s translate that savings into hours. There are 60 seconds in a minute and 60 minutes in an hour.  So, there are 60 times 60 which is 3,600 seconds in an hour. To obtain the number of hours saved, divide 18.25 billion seconds by 3,600 seconds.  The result is 5,069,444 hours of savings.


Now, let’s compute a typical life span of 75 years in hours.  We calculate 365 days per year times 24 hours per day times 75 years.  The result is 657,000 hours.


So, how many lifetimes have we saved?  We divide the total savings 5,069,444 hours by our assumed typical lifespan of 657,000 hours.  We have saved nearly 8 lifetimes.
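The walkthrough above condenses to a few lines of Ruby:

```ruby
# Jobs' assumptions: 5 million users, 10 seconds saved per user per day,
# and a typical 75-year lifespan.
seconds_per_year = 5_000_000 * 10 * 365       # 18.25 billion seconds
hours_saved = seconds_per_year / 3600.0       # ~5,069,444 hours
lifetime_hours = 75 * 365 * 24                # 657,000 hours
puts (hours_saved / lifetime_hours).round(1)  # ~7.7 lifetimes per year
```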


Of course, it’s unlikely Jobs was trying to make his point in anything but rough numbers, but his 100 lifetimes or even dozens is dramatically off — by about an order of magnitude.

Did Jobs goof on the math? Was he misquoted?

Maybe it doesn’t matter. Jobs made a point that some relatively small amount of engineering effort would save lifetimes of time, and he was right.

What’s more, the power of the lesson — even if flawed — paid bigger dividends. Apple not long ago became the world’s first trillion dollar company, and the company sold 3.7 million Mac computers in Q3 of this year alone. With many Macs remaining in service for several years, there are quite likely a lot more than 5 million people using a Mac at the very moment you read this.

The story itself has been quoted profusely since appearing in Hertzfeld and Isaacson’s works. Are we going to lambast the story as “fake news”, or are we going to say that everyone makes mistakes but the basic idea was right?  One thing is possibly true: if anybody checked the math, they didn’t talk about it.

For what it’s worth, I’ll hazard a guess at where the original math went wrong. Suppose we convert 18.25 billion seconds saved into hours by dividing by the 60 seconds in a minute and then by the 60 minutes in an hour.  Now, suppose while scribbling on the whiteboard Jobs only divided by 60 one time. We get 304,166,667, which is about the 300 million quoted in the book. That would be correct if we were talking about the number of minutes, not hours.
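Checking that hypothesis is quick:

```ruby
seconds = 18_250_000_000       # seconds saved per year under Jobs' assumptions
puts (seconds / 60.0).round    # 304166667: minutes, the "three hundred million or so"
puts (seconds / 3600.0).round  # 5069444: hours, the correct conversion
```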

Yet again, we’ve seen the reality distortion field in full effect. In the 13 years since publication, the story has been read by millions.  It has been enthusiastically retold without question in Isaacson’s biography, in well-known periodicals including the Harvard Business Review, and throughout the web and social media.

It’s worthwhile to note that Apple’s operating system has changed drastically since the 1980s. Apple’s OS X desktop operating system, launched in 2001, replaced core components with ones borrowed from BSD, an operating system with UNIX roots. As such, it’s likely the optimizations built in 1983 at Jobs’ urging have long been outmoded or removed entirely.

However, the operating system has continued to evolve upon its past and take inspiration from those stories. More broadly, Apple as a business would not be where it is today had it not had some degree of early success. Each product in the technology world builds on lessons and customer traction gained from previous versions.

One can argue not all technological progress benefits people, but I find it difficult to say there haven’t been some home runs for the better.

While reality may have been distorted longer than many would have dreamed, reality also caught up. As Apple made its reality catch up, its products have contributed lifetimes to the human experience.  And … maybe, just maybe … there was some “fake it ‘til you make it.”

A NAS in every household will help you and archaeologists. Do it now!

Our lives are digital.  Our cameras are no longer film.  Our notes are no longer postcards.  The USPS is having a hard time staying in business.

To get really deep about this … Thousands of years from now, archaeologists will see our world vividly, just like on the day your iPhone or DSLR captured it. That is … if the data’s still around.

We’re losing data left and right because we aren’t practicing good ways of storing it.

Stop spreading your digital existence across 12 devices (including the ones long retired but never copied data from in the attic/garage/dumpster/Goodwill). Keep a definitive copy of everything in one place.

It’d be a shame if cave paintings outlived our digital pictures, and right now that’s scarily possible.

If we could just centralize and manage it better, then maybe we could also have an easier time archiving it all.

So, let’s get practical!

First off … problems … how data was stored in the dark ages:

  • Cloud services.  They keep things accessible, can help centralize and they’re often inexpensive.  Cloud services miss the boat on your precious pictures and home movies because:
    • Your internet is too slow, and while Google et al are working on this, it’ll be a while yet.
    • Easy-to-use cloud storage providers are charging too much.
    • Inexpensive cloud storage providers are usually too hard to use.
  • The hard drive inside your computer can die at any time, and it’s probably not big enough.  Plus, it’s harder (not impossible) to share that stuff with say … your smart TV … and the rest of your family.
  • Portable/external hard drives.  Don’t get me started.  No.  I own far too many, and I have no clue what’s on most of them.  Plus 1/3 of them are broken — in some cases with precious photos or bits of source code lost forever.

Solution:  Get a Network Attached Storage device.  Today.  Without delay.

Why?  If you can centralize everything, it’s easier to back up.  You also have super fast access to it, and everybody in your home can share (or not — they do have access control features).

I have serious love for Synology’s devices for three reasons:

  1. They integrate with Amazon’s Glacier service.  To me, this is a killer feature.  Now I can store every single one of my selfies, vacation pictures, inappropriate home movies, etc. in a very safe place until my credit card stops working.  At $10 per terabyte per month, that credit card should work a while.  Glacier is a good deal.
  2. Seriously awesome, fully featured software.
  3. Quality, fast hardware.

All at a price that while not the cheapest doesn’t particularly break the bank.
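At the Glacier rate quoted above, the annual bill is easy to ballpark. The terabyte count here is just an example, and pricing was current as of this writing:

```ruby
terabytes = 4      # example library size
per_tb_month = 10  # the $10/TB/month quoted above
puts terabytes * per_tb_month * 12  # $480 per year
```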

Now, I’ll assume that if you’re anything like me you want speed.  You want fast access to your data, or you’re not going to use that NAS the way it’s supposed to be used.

You’re also not going to invest in a 24-drive enterprise SSD NAS because … well … you’re a home user.

So, some guidelines:

  • Buy at least twice as much storage as you think you need.  Your estimate is low.
  • Plan to upgrade/replace in 3 years.  You don’t have to make a perfect buying decision — nor do you have to buy for eternity.  Plan to MIGRATE! — which is why you’ll want hardware that’s fast enough you can copy data off it before the earth crashes into the sun!
  • Don’t plan to add more hard drives anytime soon.  Fill all the drive bays.
  • Buy the largest available drives.
  • Forget SSD.  SSD is too small and far too expensive for the storage you want.  Buy more drives and get performance advantages of having more drives instead.
  • Plan on backing up every computer you own to the NAS — size appropriately — and then some.

My Picks

With price and performance in mind, I’ll wade through Synology’s mess of models and tell you what makes sense in my opinion:

Recommendation 1:  Synology DS414

  • Four drives provide 16TB physical space — 10-12TB usable with Synology’s own RAID.
  • Four drives provide better read performance than two or one
  • Spare fan just in case one fails
  • Link aggregation, but you’ll never use it.

Recommendation 2:  Synology DS214+

  • Fastest Synology two drive model.
  • Two drive redundancy.
  • For some users, the video playback features of the DS214play may be more appropriate, but it’s slower and more expensive.

Recommendation 3:  Synology DS114

  • Danger!  Just one drive — no redundancy.  You are backing up with Glacier, right?
  • Fast for a single drive NAS

All provide:

  • USB 3.0 port(s) to load your data from a portable drive
  • Gigabit ethernet
  • All that lovely Synology software!

Hard drives?

Personally, I’d buy the Western Digital Red 5400RPM NAS drives in 4TB.  Based on Amazon’s pricing, I don’t see much of a premium if any for getting the largest model on the market.  The larger the drives, the more benefit you get from your NAS, so I wouldn’t skimp.

If you really truly believe you won’t need the space, but you’d like the performance of four drives on the DS414, then you can save around 350 USD by purchasing 4x 2TB drives instead of 4x 4TB.

Your Network Needs Speed

Now, along with all that firepower in the NAS, you need the network to feed that speed addiction.

Get a good quality switch, and if you’re going to use your NAS over wireless check out Amped Wireless RTA 15.  Wired speeds will nearly always be faster, but I like wireless convenience just like you.

You’ll Love Speedy Backups

For extra credit, Apple’s Time Machine backup works really nicely with my NAS.  It runs a lot faster when I plug in the ethernet cable.  On a Cisco 2960G switch (yes, I have some serious commercial grade switches lying around), my late model Apple MacBook Pro Retina did around 100 gigs in under 15 minutes.
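A quick check shows that figure is close to gigabit ethernet's practical ceiling, which is why the wired connection matters:

```ruby
megabytes = 100 * 1000  # ~100 GB expressed in MB
seconds = 15 * 60       # the 15-minute backup window
puts (megabytes / seconds.to_f).round  # ~111 MB/s, near gigabit's ~125 MB/s limit
```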

Do I need a NAS in the future?

Possibly not, once bandwidth gets there and cloud offerings match up at the right price points.

Oh, and a little re-arrangement of the letters NAS … NSA.  User trust!  Yes, all this assumes user trust of cloud services.  Then again, the NSA can probably backdoor your NAS if they really want to.  Sorry.  Nothing’s perfect.

Happy Trails

Your mileage may vary.  My new DS414 was a religious experience.

Why Amazon’s EC2 Outage Should Not Have Mattered

This past week I got a call in the middle of the night from my team that a major web site we operate had gone down. The reason: Amazon’s EC2 service was having issues.

This is the outage that famously interrupted access to web sites ordinarily visited by millions of people, knocked Reddit alternately offline or into an emergency read-only mode for about a day (or more?) and drew mention in the Wall Street Journal, on MSNBC and in other major news outlets.

In the Northern Virginia region where the outage occurred and where we were hosted, Amazon divides the EC2 service into four availability zones. We were unlucky enough to have the most recent copies of crucial data in exactly the wrong availability zone, and this made an immediate, graceful fail-over to another zone nearly impossible because the data was not retrievable at the time. Furthermore, we were unable to immediately transition to another region because our AMIs (Amazon Machine Images) were stuck in the crippled Northern Virginia region and we lacked pre-arranged procedures to migrate services.

Though procedures to migrate to another region were in the works, they were not yet established. Having some faith in Amazon’s engineering team, we decided to stand pat. Our belief was that by the time we took mitigating measures, Amazon’s services would be back to life anyway. And … that proved to be true to the extent that we needed.

The lessons learned are these:
(1) Replicate your data across multiple Amazon regions.
(2) Do (1) with your machine images and configuration as well.
(3) For extra safety, do (1) and (2) with another cloud provider too.
(4) It’s probably a good idea to also keep an off-cloud backup.

Had we already done just (1) and (2), our downtime would have been measured in minutes, not hours, as one of our sysadmins flipped a few switches … all WHILE STAYING on Amazon systems. Notice how Amazon’s shopping site never seemed to go down? I suspect they do this.
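The "flip a few switches" idea reduces to something like this sketch. The region names, health checks and replication lists here are illustrative stand-ins, not our actual tooling:

```ruby
# Pick the first healthy region that already holds replicated data
# and machine images, in order of preference.
PREFERRED_REGIONS = %w[us-east-1 us-west-1 eu-west-1 ap-southeast-1]

def pick_region(healthy, replicated)
  PREFERRED_REGIONS.find { |r| healthy.include?(r) && replicated.include?(r) }
end

# us-east-1 is down; fail over to the first healthy region with our data.
puts pick_region(%w[us-west-1 eu-west-1], %w[us-east-1 eu-west-1])  # eu-west-1
```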

As for the coverage stating that Amazon is down for a third day and horribly crippled, I can tell you that we are operating around the present issues, are still on Amazon infrastructure and are not significantly impacted at this time. Had we completed implementation of our contingency plans only within Amazon by the time this happened, things would have barely skipped a beat.

So, take the hype about the “Great Amazon Crash of 2011” with a grain of salt. The real lesson is that in today’s cloud, contingency planning still counts. Amazon resources providing alternatives in California, Ireland, Tokyo and Singapore have hummed along without a hiccup throughout this time.

If Amazon would make it easier to move or replicate things among regions, this would make implementation of our contingency plans easier. If cloud providers in general could make portability among each other a point and click affair, that would be even better.

Other services such as Amazon’s RDS (Relational Database Service) and Beanstalk rely on EC2 as a sub-component. As such, they were impacted as well. The core issue at Amazon appears to have involved EBS, the storage component on which EC2 increasingly relies. Ultimately, a series of related failures and the overload of remaining online systems caused instability across many components within the same data center.

Moving into the future, I would like to see a world where Amazon moves resources automagically across data centers and replicates in multiple regions seamlessly. Also, I question the nature of the storage systems behind the scenes that power things like EBS, and until I have more information it is difficult to comment on their robustness.

Both users and providers of clouds should take steps to get away from reliance on a single data center. Initially, the burden by necessity falls on the cloud’s customers. Over time, providers should develop ways such that global distribution and redundancy happen more seamlessly.

Going higher level, components must be designed to operate as autonomously as possible. If a system goes down in New York City, and a system in London relies upon that system, then London may go down as well. Therefore, a burden also exists to design software and/or infrastructure that carefully take into account all failure or degradation scenarios.

Ruby Developers: Manage a Multi-Gem Project with RuntimeGemIncluder (Experimental Release)

A couple of years ago in the dark ages of Ruby, one created one Gem at a time, hopefully unit tested it and perhaps integrated it into a project.

Every minute change in a Gem could mean painstaking work often doing various builds, includes and/or install steps over and over.  No more!

I created this simple Gem (a Gem itself!) that at run-time builds and installs all Gems in paths matching patterns defined by you.

I invite brave souls to try out this EXPERIMENTAL release now, pending a more thoroughly tested/mature release. Install RuntimeGemIncluder, define some simple configuration in your environment.rb or a similar place, and use require as you normally would.

Here’s an example I used to include everything in my NetBeans workspace with JRuby.

Download the Gem from http://rubyforge.org/frs/?group_id=9252

To install, go to the directory where you have downloaded the Gem and type:

gem install runtime-gem-includer-0.0.1.gem

(Soon you may be able to install directly from RubyForge by simply typing ‘gem install runtime-gem-includer’.)

Some place before you load the rest of your project (like environment.rb if you’re using Rails) insert the following code:

trace_flag = "--trace"
$runtime_gem_includer_config = {
  :gem_build_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S rake #{trace_flag} gem",
  :gem_install_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S gem install",
  :gem_uninstall_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S gem uninstall",
  :gem_clean_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S rake clean",
  :force_rebuild => false,
  :gem_source_path_patterns => [ "/home/erictucker/NetBeansProjects/*" ],
  :gem_source_path_exclusion_patterns => []
}
require 'runtime_gem_includer'

If you are using JRuby and would like to just use the defaults, the following code should be sufficient:

$runtime_gem_includer_config = {
  :gem_source_path_patterns => [ "/home/erictucker/NetBeansProjects/*" ],
  :gem_source_path_exclusion_patterns => []
}
require 'runtime_gem_includer'

Now, in any source file, simply require as you normally would:

require 'my_gem_name'

And you’re off to the races!

Gems are dynamically built and installed at runtime (accomplished by overriding Kernel::require).  Edit everywhere, click run, watch the magic! There may be some applications for this Gem in continuous integration. Rebuilds and reloads of specified Gems should occur during application startup/initialization once per instance/run of your application.
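The override itself can be sketched in a few lines. This is an illustrative stand-in, not RuntimeGemIncluder's actual source; build_and_install and gem_source_exists? are hypothetical names for the work the real Gem performs:

```ruby
# Wrap Kernel#require so extra work can happen before the normal lookup.
# In the real Gem, a path matching :gem_source_path_patterns would
# trigger the configured build and install commands here.
module Kernel
  alias_method :original_require, :require

  def require(name)
    # e.g. build_and_install(name) if gem_source_exists?(name)
    original_require(name)
  end
end
```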

Interested in source, documentation, etc.? http://rtgemincl.rubyforge.org/

My Project – Better Information: It’s Coming

As many close to me know, I have spent the last few years working on a largely stealth project. The original idea hatched in late 2005 on a 25-hour journey to visit a friend in Singapore.

The project remains mostly in stealth, but I will make some public comments.

Broadly speaking, today’s information suffers from intentional and unintentional inaccuracy, bias, incompleteness, inconsistency, inefficient presentation and other problems.

I look to bridge the gap between masses of loosely structured information and usable knowledge. Raw data needs to go to real wisdom in your brain … faster.

To this end, my team has explored many solutions both technological and non-technological.

Stay tuned.