Big Compute 20

Complexity is why the Innovator has an infinite appetite for computing


This year, we’re co-founding and launching the Big Compute community, and kicking it off with a thought leadership conference in February at SFJAZZ.

“What is Big Compute exactly? Is it similar to Big Data?”

Well, the Big Compute community is a group of engineers and scientists who share techniques for solving problems that either require, or could benefit from, near-limitless compute capacity.
The concept grew out of a simple, out-of-the-box question: How would our world change if computing were totally free and infinite in capacity?
Kind of a wild idea, no?
It is not that far-fetched. With instant messaging, email and social media, we have achieved near-infinite connectivity. However, I find that most people struggle with the concept of infinity, and that it is generally absent from the outlook of most corporations and institutions. Even highly innovative companies extol cost reductions, resource management and budgets.
Here’s how that plays out in product development: organizations buy workstations or a small datacenter for their engineers, who are then constrained by its fixed capacity and architecture. This forces users into an explorative scenario, where the engineer’s fundamental challenge is: “How much can we get done with what (little) we have available?” MacGyver mode: enabled. Innovator mode: disabled.
Almost instantly, a market of parasitic resource-management tools springs up to “help” you. These take the form of layers of middle management, soul-sucking applications, prioritizers, TCO calculators and, worst of all, schedulers. At their core, these are nothing more than systematic ways to tell a compute-hungry innovator “no”, “not now”, “too expensive” or “not a priority.”
The engineer now waits miserably in a bureaucratic queue.
And that, my friends, is simply not cool. The only thing worse than this stranglehold on innovation is wasting your precious engineering years writing software for counting ads. MacGyver was clever to use a paperclip to pick a lock, but let’s face it… he wasn’t creating flying taxis, designing supersonic airliners, or building Mars colonies.
Over the last few decades, computing demand has grown exponentially, and it isn’t difficult to demonstrate why. Take an aerospace engineer at an innovative aircraft company, responsible for the design of a single part. He uses 6-8 simulation applications to verify that his design is strong enough to meet the spec, light enough not to drive up jet-fuel cost, and simple enough to be mass-produced.
He can run these applications by himself, on a $10,000 workstation under his desk. When the basic design is verified, it’s tempting to think his job is done; in reality, he’s just getting started. He has only addressed the first key development question: does this one design meet the basic spec? A vast fabric of interdependencies could now be checked, and with each successive level of analysis comes another multiple of computation (estimated below in core-hours).

Who | Innovation question | Core-hours per cycle
Individual | Does this one design meet the basic spec? | 1 hour
Individual | Does it behave as expected in a sub-assembly? | 3 hours
Individual | Which of the available materials should it be made of? | 5 hours
Individual | Can it be manufactured inexpensively? |
Team | Does the sub-assembly behave in the main assembly? | 1 day
Team | Can the sub-assembly be installed cheaply? | 2 days
Team | Are all of the materials compatible? | 2 days
Team | Can the sub-assembly be maintained? | 3 days
Org | Does the main assembly pass safety certification within the whole? | 5 days
Org | Does the overall assembly have vibration, noise or ambience/quality issues? | 6 days
Org | Can we explore the top 10 configurations for safety and quality? | 12 days
Org | Can we explore the top 1000 overall design configurations? | 22 days

With each successive question, more design complexity is incorporated into the analyses and simulations. This is reassuring to the engineer, but it exponentially drives up the computing capacity required to maintain confidence in the design and reduce the risk of missing something critical.
Unfortunately, these misses are frequent and can have a massive impact on product cost, quality, safety and delivery schedule. The typical leadership response to a miss is an organizational pledge to test, simulate and validate more as a way to manage the risk, which raises the complexity bar and with it the net computing demand.
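To make the multiplication concrete, here is a minimal back-of-the-envelope sketch in Python. The per-question figures loosely follow the table above (day entries converted at an assumed 24 core-hours per day); the redesign-cycle count, configuration-sweep size and workstation core count are illustrative assumptions, not measurements.

```python
# Toy model: how validation depth and design-space breadth multiply
# compute demand. All figures below are illustrative assumptions.

# Core-hours per innovation question, loosely following the table above
# (day entries converted at an assumed 24 core-hours per day).
core_hours_per_question = [1, 3, 5, 24, 48, 48, 72, 120, 144]

full_validation = sum(core_hours_per_question)  # one design, every question

iterations = 20    # assumed redesign cycles before sign-off
candidates = 1000  # assumed size of the configuration sweep (last table row)

total_core_hours = full_validation * iterations * candidates

workstation_cores = 8  # the $10,000 desk-side machine, assumed to have 8 cores
years = total_core_hours / workstation_cores / (24 * 365)

print(f"Total demand: {total_core_hours:,} core-hours")
print(f"On one workstation: roughly {years:.0f} years of wall-clock time")
```

Even with generous rounding, the answer lands north of a century of wall-clock time on the desk-side machine; the case for near-limitless capacity is one line of arithmetic.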
In contrast to the explorative scenario, the normative scenario asks: What should I use to get exactly what I want? This places no constraints on resources, forces you to elicit what it is you really want, and switches the neuron-smashing back into “innovation mode.” In this mode, the “how” is just a means to an end, and only the end matters.
The hardware doesn’t, and shouldn’t, matter. My apologies to any cluster-huggers, server junkies, or scheduler folks reading this… but the era of speeds-and-feeds is nearly over.

So why the Big Compute community, and why now?
Short answer: Computing is finally a widespread commodity, and demand is (still) rising
We are entering an era where users will have instant access to all of the compute capacity they need, for any problem they need to solve. Instant access? All computing? Any problem? It sounds like madness, it is unsettling, and I think we need to talk about it. The Big Compute community is therefore built to facilitate a re-evaluation of how we make things, and to break free of the resource-limited mindset in product development.
With unlimited, free computing (or nearly so), I imagine two major discussion categories at Big Compute:

  • What is it that you want to do, and why?
    • Untangling the elicitation puzzle and answering “why did you choose this objective?”
    • What are the innovation drivers behind this choice?
    • How much complexity do you need to address?
  • What was the real-world impact of this mindset change, especially on your demand for computing?
    • How does the usage of computing actually change?
    • What kind of computing architecture do Big Compute users prefer?

I hope to see you at Big Compute 20 this coming February 11th. Bring your normative mindset, and your ferocious appetite for solving complex problems that matter.
