Test-Driven DevOps Design – Value Beyond Execution

Some have said “It’s all relative.” If the statement were true, then it could not be proven false. By nature, relativity cannot be proven true or false. Therefore the statement itself cannot be proven to be true.

Others say “It’s all in the execution” (referring to value). If the statement were true, then design could not exist. There would be no value possibly created in the design process if all of the value were contained in some form of execution. The value of design is in its ability to show patterns of what is desired. For the purposes of software design I am specifically referring to the value created in patternizing a desired set of functional and non-functional requirements. The design process creates valuable output which may manifest itself as a cohesive implementation such as an appliance, a MacBook Pro, or an iPhone. These are just examples. The design process may also create output which is intended for further execution, such as a template. If all value existed in execution, templates would not exist and no one would pay for them.

I am not asserting that willingness or ability to pay at a given time is the sole indication of value in today’s markets of often politically driven consumer behaviour (i.e. boycotting a product or service which supports a perceived opposing interest faction). I would venture far enough to say that the only thing “all in the execution” is the absence of credit, if that’s a thing. I suppose design plagiarism is just one example of a type of plagiarism which is “all in the execution.” Notice it does not create value, but rather changes the consumer’s perspectives of value in such a way as to fundamentally distract the consumer from the value of knowing the originator of a specific work. If the consumer knew where the true source actually originated, they may or may not choose to pursue the originator. If the consumer isn’t given the option of knowing where the true source actually originates… I have to ask “why not?” <3

Design Culture + Test-Driven DevOps = Test Driven DevOps Design

If you are familiar with Test Driven Development then this should be no mystery. Test Driven Development has been around long enough to be broadly understood. From a cultural perspective it teaches the acceptance of failure in the first iteration. Once the test passes, further development inevitably occurs. Operators are familiar with similar approaches to test-driven deployment and quality-assured change management. When infrastructure is code, the process of deployment becomes more transparent across development and operations groups via test-driven quality assurance processes which are often codified into tools.

In Test-Driven DevOps Design the tools are the output of a previous iteration. This is made possible not just by infrastructure-as-code but by template-driven test automation. On my sabbatical I have identified 4 fundamental design constraints which I have applied to a Domain Specific Language (DSL) known as STRAP (Service-Templates Running A Process) implemented by a hyperscale PaaS known as GitStrapped. The 4 fundamental design constraints also define the domain:


Each STRAP in the GitStrapped hyperscale PaaS follows these constraints to ensure higher velocity with each iteration. I call this USIR-friendly.

Design Culture Adds Value Beyond Execution with Every Push

In design-culture there are often celebrations following a successful release, and when releases happen frequently celebrations may also occur frequently. I have found the velocity itself to be a worthy reward, but I am privileged to be surrounded by others who embrace the culture of design which gives credit to a design contributor any time a contribution is made. I have witnessed value being added back as I become connected to many pioneers in software design due to paying respects to those who came before me. With Test-driven DevOps Design, much execution is automated, reducing and containing errors in parallel execution domains. I don’t miss the “race-conditions” along critical paths, either.

So what does it look like from a Process / Business Perspective?

Test-Driven DevOps Design can be applied to existing project workflows such as Kanban or applied to a Service-oriented Modeling Framework such as the one shown in this diagram by Sean Fox:
Service-oriented Modeling Framework from Conceptual Services to Solution Services, diagrammed by Sean R. Fox
This diagram is useful in illustrating one possible application of a Domain Specific Language such as STRAP (Service-Templates Running A Process) atop a hyperscale PaaS such as GitStrapped for Discipline-specific modeling as an iterative process. Sean Fox’s diagram is also useful in illustrating one possible workflow which encapsulates business needs, determined in an analysis process, as part of a longer iterative process where the input is a service concept (or pattern of service-quality desires, requirements and other new ideas) and where discipline may produce a solution of services as output according to design constraints.

Posted in designing scalable systems, Test-Driven DevOps Design

OnSabbatical – Truth Through Experimentation

Technology Leads the Market

Pioneering Test-Driven DevOps Design has been neither all business nor all roses. Some of the hard work doesn’t pay off until a way is found to collect. Other times I find myself surrounded by those who dislike the rise of technology or science. For some reason some teams act like they don’t like the idea of independent innovation or design thinking, even when some believe the conventions they follow are the one true way. Some even use a MacBook Pro with Ruby, Vagrant, Chef, and TextMate. This is one way. What would Steve Jobs say?

The strong survive and learn to leverage the stigma as validation. It proves the concept that Technology leads the market. Have you achieved a strong technology advantage? If so, never compromise.

Technology Never Compromises

I’m Unnonymous… you know where to find me. You know my name. I have a little bit to hide (access credentials, etc.) but I’m out here on a WordPress blog making fun of those who say they don’t forget or forgive, etc. I forget all the time and I hope to forgive others as I have been forgiven. I’m here to tell you that Technology is not so forgiving. When you get behind on Technology, you keep getting further behind. It’s because of people like me who keep rolling forward with every push.

Prove Concepts To Thyself – Because Designers Don’t Even Need a Plan.

Haters are out here tryna copy the plan before the Design Template has been released. Hold up. The template was designed for you to copy: pay the price, copy the template, get served. If you’re a designer, you don’t need a plan… you probably have it in your subconscious, and I bet any plan worth following is quite intuitive to your design as you discover and contrive it. By the time you can comfortably set aside design time to contrive a formal plan you’re probably late… whether or not that’s fashionable. I don’t want to discuss what your competitors and all of their competitive interest groups will be doing by the time you communicate your plan in a way they admit to understanding. Why share it when a designer only needs to prove the concept to oneself?

The Identification of 4 types of Haters in the Space

Identifying haters is something very easy to do if you ever go OnSabbatical. I could start a meetup for haters at this point. Here are the 4 types who often surround me:

  1. Those who hate the Technology they are forced to consume as they depend on it in their daily lives.
  2. Those who live by making compromises and feel threatened by the empowering culture shift of independent, permissionless innovation (based on innovation’s threat to their existing investments in the infrastructure of the status quo).
  3. Those who hate volatility and fail to admit that the only constant is change itself in their pursuit of temporary feelings of security.
  4. Some other form of serial compromiser who perceives reality as the forfeiture of ideology.

Being the change. May the Impact be great.

“If we could change ourselves, the tendencies in the world would also change. As a man changes his own nature, so does the attitude of the world change towards him. … We need not wait to see what others do.” – Mahatma Gandhi

Sabbatical – “It must be nice.”

You want to DevOperate like me? Rinse and repeat after me and start learning by doing. Rent-seeking recognition-seekers and position-takers (trolls) will try to explain to you what you can discover, or already discovered, on your own. Some might try anything to get you off the discovery path (as though a discovery path is somehow their intellectual turf?). Whether or not they intend to distract you from your discovery, pay little attention to them and lots of attention to your discovery… which they hope to associate themselves with. How else can they be associated with the greatness of experimentation… other than, of course, embracing the scientific method directly? Now it’s my turn to say “That’s not gonna happen” (whether or not I can help it). My assertion, however, is speculative, but based on their behaviour as it may be publicly observed over a long period of time.

Before I went OnSabbatical, I used to think that one or two specific vendors and their partners were kryptonite. Whether or not I am DevOpSuperman, I now realize, on my sabbatical, that the attitude of resisting change/technology and the compromises around it are the planet Krypton. Before I overdose on privileges and fancy catered foods and Sonoma grapes, I must just say no to the poison of these pessimisms and roll forward with another experiment. You see, these experiments are sometimes the only way to learn what isn’t taught. There are many things that haven’t been learned yet, and when they are discovered… what are the chances that the person who discovers them is willing to teach? What are the chances the teaching is accessible? From an engineering perspective, experimentation is often the most direct path to a discovery.

I solve problems of Unpredictable Scale, what do you do?

I’m just a solution provider out here with what might seem to be a personal problem. (Maybe it is and maybe it’s not.) My problem is that I can explain. The plumber doesn’t really explain much, and I don’t know how the plumber feels about a hovering home-maker. I know that I don’t advise other solution providers to explain, if they intend to sell solutions. Instead:

  1. Sell the solution first (if you have to eat non-meetup-pizza too.)
  2. Solve the problem (especially if it’s critical.)
  3. Then explain as a charity if you choose. (Remember there may be more problems out there, Yung SuperDevOp.)

I have explained and described in colour some cultural barriers to independent, permissionless innovation in this article. Thank you for reading / following.

Posted in personal computing, Test-Driven DevOps Design

Test Driven DevOps Design

Asher Bond DevOpsFu
I’m just a solution provider out here with what might be just a personal problem, rather than a professional problem. So what’s the problem? I CAN EXPLAIN. Now I hope you ask, WHAT IS TEST DRIVEN DEVOPS?

Test-Driven DevOps means you don’t have to feel sorry for testers and quality assurers in the world of DevOps. This is because the tester can take a more leading role in the design process. Ultimately those who take down requirements and form a design pattern are leading on design, but I have seen quite a few startups putting the developer in a team of “product developers.” Now I ask them to start a more behaviour-driven (shared tools, shared process, shared development responsibilities) or domain-driven (connecting implementation to evolving models) development process based on test-driven development, which will inevitably fail in the first iteration. Once the test is written, a spec is formed. Let’s take a look at rspec for example. Here is some code which I copied from http://rubydoc.info/gems/rspec-core/frames

# in spec/calculator_spec.rb
describe Calculator do
  describe '#add' do
    it 'returns the sum of its arguments' do
      expect(Calculator.new.add(1, 2)).to eq(3)
    end
  end
end

When you gem install rspec and run the test, it shall fail in that first iteration.

$ rspec spec/calculator_spec.rb
./spec/calculator_spec.rb:1: uninitialized constant Calculator

The simplest solution: a simple class that adds a + b.

# in lib/calculator.rb
class Calculator
  def add(a, b)
    a + b
  end
end

Add this to the top of calculator_spec.rb:

# in spec/calculator_spec.rb
# - RSpec adds ./lib to the $LOAD_PATH
require "calculator"

Some of you are probably thinking, “So what, you can copy and paste an rspec example from the manual.” Sure, anyone can do that. But now, thanks to a domain specific language aimed at solving problems of scalability, repeatability, and interoperability… you can turn what would ordinarily be a PaaS of service-oriented architecture known as @GitStrapped into a multi-PaaS multi-class and roll forward from there, in a DevOpsy fashion. Time for DevOps multi-classing.

From a more operational perspective, as an early-stager in the early days of the commercial Internet, I learned the hard way that change is inevitable. With nothing constant but change, it makes a lot of sense to make changes more manageable by recognizing the inevitability of development in sustainable operations. What operator doesn’t test?

Test Driven DevOps isn’t just a fancier way of differentiating myself from the DevOps fakes, Test Driven DevOps is also a way of articulating the need for operational intelligence to include test results. If you don’t know how to accurately manage the resources it could be that you don’t know how to accurately measure results or costs in some cases. This doesn’t mean you suck, necessarily. It could mean that you have ventured into uncharted waters and now you want to see if you can get a horse to drink.

Once test results from the production environment form a cohesive summary of insights, an operations manager can begin to dish out a new set of service level agreements to audiences where he or she is raising the bar for what system quality means.
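As a sketch of what that cohesive summary might look like, here is a small Ruby example that rolls production test results into per-service pass rates an operations manager could quote when raising the SLA bar. The service names and numbers below are invented for illustration:

```ruby
# Hypothetical production test results for two services (data is made up).
results = [
  { service: 'api',  passed: 998,  failed: 2 },
  { service: 'auth', passed: 1000, failed: 0 }
]

# Pass rate as a percentage -- the kind of figure an SLA could be built on.
def pass_rate(r)
  total = r[:passed] + r[:failed]
  (r[:passed].to_f / total * 100).round(2)
end

summary = results.map { |r| [r[:service], pass_rate(r)] }.to_h
# summary => { "api" => 99.8, "auth" => 100.0 }
```

From a summary like this, “99.8% of production checks pass” becomes a measurable baseline rather than a feeling.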

If the operator is an IT operator, he or she can use this Test Driven DevOps approach to reduce costs in his or her organization. If he or she manages an old enough shop, chances are they were already doing it and didn’t know how to explain it the way I do with tomorrow’s pre-freshness. Think about that the next time you configure, make, make install. When I toss an apple it might taste like it ain’t ripe, but if you can digest it, then you find out what kind of stomach you have for the tree that apple didn’t fall far from.

Money might not grow on trees, but I’m here to tell you that code really does almost always grow on a tree of some kind. Now you still have to water it… and developers drink beer that’s not necessarily the same PBR you can feed to the rest of our hipster community. You can lead a horse to water, but if it’s dead don’t kick it and expect to get any code flourishment from the kick itself.

The process of test driven devops empowers an operations manager or engineer and generally anyone in operations… because the process makes the changer think about the impact of the next change in every iteration. It’s not rocket science, but how much you wanna bet they use this technique before a safe launch?

The other thing even the most high-achieving, highly profiled young developer could get out of doing Test Driven DevOps for real is all those insights into things you couldn’t learn within the constraints of confidentiality or half-published information out there… plus learning the things that maybe you can’t so easily learn in school. In fact you also learn the things you cannot possibly learn in even the finest schools with the strictest regimen of competitive academics in their curriculum… because the damn things you’re discovering in this damn test driven process are the damn things no one has discovered damn yet, damnit! If that’s not dangerous, then not testing probably is. So there could be a decision about what kind of danger you decide to present these domains as alternatives. My opinion is to build trust and do Test Driven DevOps unless you have a reason not to. That will minimize the risks to all domains, I’m guessing, but if you need a guarantee I know where to find one and how to make them… after working at Elastic Provisioner and delivering the Elastic Promise since 2010 or 2011, around that time. For those of you who haven’t heard, the Elastic Promise scales on demand because the client can sponsor the guarantee via TDD (following a thorough analysis of business objectives and of course requirements).

Besides if you’re not doing DevOps, why not?

Someone who favors predictions won’t allow you to see what happens when the experiment is complete?


Autonomous DevOps at Scale and Software Defined Networking

After the privilege of attending the Open Networking Summit (which is a conference discussing open standards around Software Defined Networking… in case you thought it was a bunch of LinkedIn open networkers) I heard Vint Cerf briefly discuss the concept of permissionless innovation. Software defined networking can be a strong enabler of permissionless innovation for DevOps, because it exposes dynamic network capabilities via REST or other APIs. When I think of DevOps at Scale, I think of permissionless innovation as a culture which networks across domains, and that’s rather disruptive to systemic monolithia.
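To make the REST-exposure point concrete, here is a small Ruby sketch that builds a flow-rule payload the way a script might before POSTing it to an SDN controller. The endpoint and field names are assumptions made up for illustration, not any real controller’s API:

```ruby
require 'json'

# Build a flow-rule payload for a hypothetical SDN controller REST API.
# Field names ('dpid', 'match', 'actions') are illustrative assumptions.
def flow_rule_payload(switch_id:, in_port:, out_port:, priority: 100)
  {
    'dpid'     => switch_id,
    'priority' => priority,
    'match'    => { 'in_port' => in_port },
    'actions'  => [{ 'type' => 'OUTPUT', 'port' => out_port }]
  }
end

request_body = JSON.generate(flow_rule_payload(switch_id: 1, in_port: 2, out_port: 3))
# POSTing request_body to something like http://controller:8080/flows
# (a hypothetical endpoint) would install the rule via the API --
# no change ticket, no waiting for a human: that's the permissionless part.
```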

A lot longer than you might think in that first iteration.

Shunichiro Tejima of NEC was asked during Q&A what the hardest part of implementing Software Defined Networking was. The answer: the first iteration. Once you get past it, further development, implementation, and innovation start to move with much more velocity. That’s because you repeat success from a coded template rather than trying to remember what was done before, what worked, and what didn’t.

The enterprise is already somewhat familiar with the concept of a service catalog. I use one to keep track of capabilities so they can be called in a developer’s native tongue.
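A minimal sketch of that idea in Ruby: a catalog that maps human-readable capability names to callable service templates. The capability names below are hypothetical examples, not a real catalog:

```ruby
# A minimal service-catalog sketch: capability names mapped to callable
# service templates. The registered capabilities are invented examples.
class ServiceCatalog
  def initialize
    @capabilities = {}
  end

  # Register a capability under a human-readable name.
  def register(name, &block)
    @capabilities[name] = block
  end

  # Call a capability by name -- the developer's "native tongue."
  def call(name, *args)
    template = @capabilities.fetch(name) { raise ArgumentError, "unknown capability: #{name}" }
    template.call(*args)
  end

  def list
    @capabilities.keys.sort
  end
end

catalog = ServiceCatalog.new
catalog.register('provision_vm') { |size| "provisioned a #{size} vm" }
catalog.register('open_flow')    { |port| "flow opened on port #{port}" }
```

The point is that once capabilities live behind names, a developer can browse and invoke them without knowing which system implements each one.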

Posted in Test-Driven DevOps Design

Hide IT under a bushel, NO! I’m gonna let IT shine.

Technical Industry Growth Infographic Showing new jobs in Network Architecture and IT Services
Infographic taken from Westwood College’s Blog.

When I was in Sunday school as a young boy there was a song we used to sing about Christian evangelism, and it went like this: “Hide it under a bushel, NO! I’m gonna let it shine!” It was about sharing the gospel message of Jesus (the good news of eternal salvation through the grace of God’s son’s sacrifice). I felt like the cloud at one point was a saving grace that many IT managers didn’t accept when they heard the gospel from the technical evangelist. Have consumer protectionists colluded and boycotted IT? No waaay, it woulda been in Consumer Reports… Is that why cost reduction is such a focus? Because IT was/is generally profitable, right?

Making IT last first was never about making sure that IT could survive when people pretended that it wasn’t returning $1.90 for every $1 spent (generally).


Then test that, prior to making a guarantee. They can pay as they go for the guarantee.


Making IT last first was more than just a way of getting IT managers to think about DevOps and paying off technical debt in advance through investment in process. It’s also a clever way of saying: we know about the secret IT farms you’re trying to grow, because we grew the original ones back when IT (or just technical staff in general) was so disunified that it couldn’t be called a single thing. So if they want to get out there and grow a secret IT farm (as if it’s some illegal growing operation and they don’t even know how to work the lights or electricity), we know the game, because before there were computers there were typewriters, and before there was IT there were webmasters, R&D, engineers, etc.

So I don’t care too much, in terms of what it takes to write this blog article, what those who quickly say they “aren’t doing IT” think their domain is.

I decided to register minimum-viability.com, and maximum-viability.com, but marginal-viability.com is someone else’s domain. There’s a huge margin, but there will be more where that came from since ideas and the best backs and brightest minds are involved and generally connected by philosophy or ideals enough to reduce the overhead of negotiations, from administrative / deal-flow perspectives. No one panic, you have to be a little bit dangerous at minimum-viability these days to hit maximum-viability.

A lot of folks get out there and want to pretend like they aren’t doing IT. One of my best friends / favorite competitors has a habit of joshin’ me with the ole “Ur not really doing it!” LOL! I love that one. The pessimism is the poison, but we’ve gotta earn immunity somehow. Plus I know what you’re really up to. I know you want IT. I saw you looking at a DNS zone file and I know you’re IT curious. You even know what an MX record is. And yet for some reason, some of us make more DNS changes in one day than we make trips to the bathroom. Someone should write a blog “everything is a DNS problem” but it’s really about smart change management and Test Driven DevOps.
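In the spirit of smart change management for DNS, here is a toy Ruby zone-file scanner that pulls the MX records out of a zone snippet so a change can be reviewed before it ships. The zone data is invented for illustration; real zone files have many more record shapes than this handles:

```ruby
# A toy zone scanner: extract MX records from a zone snippet so a DNS
# change can be diffed and reviewed before it ships. Invented zone data.
ZONE = <<~ZONE
  example.com.  3600 IN MX 10 mail1.example.com.
  example.com.  3600 IN MX 20 mail2.example.com.
  www           3600 IN A  192.0.2.10
ZONE

def mx_records(zone_text)
  zone_text.each_line.filter_map do |line|
    fields = line.split
    next unless fields[3] == 'MX'      # keep only MX records
    { priority: fields[4].to_i, host: fields[5] }
  end.sort_by { |r| r[:priority] }     # lowest priority value wins first
end
```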

Just so you know, some of us hyperscalers have been around since you could win FarmVille by growing the biggest grid of trees, and some of us have been around since mainframes were the way to consume computing. Some of us have been around since abacuses (or abaci, in case you don’t know what I’m talking about) were used by professionals to make calculations based on bit shifting. Have times changed, friends? I suggest you take a look at keeping the competition friendly, then release your source code sooner rather than later/never. Better late than never. An’ all y’all first comers… newcomers are on the way and there are more ideas where those came from.

Having written all that, you don’t want to expose everything in an IT operation, because the part that’s not service-oriented generally involves access controls or identity management. You want to expose the part that everyone else thinks is a secret, but it’s the same secret.

Posted in Agile Development

Sub-semaphoric Parallel Execution Domains

DSOC Orchestrator TM - DISTRIBUTED SYSTEMS ON CHIPS - Designed by Asher Bond for Elastic Provisioner, Inc.

GitStrapped provides a linearizable sub-semaphore for each STRAP (service-template-running-a-process).

Each sub-semaphore is responsible for eliminating race conditions as applications compete for device, network, and processor resources.

Sub-semaphores appear to the application user as a kernel, but sub-semaphores are not necessarily sub-kernels.

Sub-semaphores may run under a microkernel in userspace or they may be dedicated to a physical device, functioning as a monolithic kernel.

Sub-semaphores are not to be confused with a subset of mutual excluders (mutexes). Although a binary semaphore limits access to a single resource, making it shareable… semaphores may be constructed in such a way to allow for parallel execution across multiple processors.

Are you trying to say that a mutex is a type of semaphore? Yeah kinda… I’m saying that a mutex is an implementation of a binary semaphore, which at a given moment either provides application user(s) access to the (assumed to be limited) resource or it does not provide access to the resource. The mutex can run under a semaphore as a sub-semaphore in order to exclude when exclusion makes more sense, from a resource perspective.
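To ground the mutex-as-binary-semaphore claim, here is a sketch of a counting semaphore built from Ruby’s `Mutex` and `ConditionVariable`. This is an illustrative construction, not the GitStrapped implementation: with a count of 1 it behaves as a binary semaphore (i.e. a mutex), and with a higher count it admits that many holders in parallel.

```ruby
# A counting semaphore sketched from a Mutex plus a ConditionVariable.
# CountingSemaphore.new(1) is the binary case -- effectively a mutex.
class CountingSemaphore
  def initialize(count)
    @count = count
    @lock  = Mutex.new
    @cond  = ConditionVariable.new
  end

  def acquire
    @lock.synchronize do
      @cond.wait(@lock) while @count.zero?  # block until a slot frees up
      @count -= 1
    end
  end

  def release
    @lock.synchronize do
      @count += 1
      @cond.signal                          # wake one waiter, if any
    end
  end
end
```

Running the excluder under a larger-count semaphore is what lets exclusion happen only “when exclusion makes more sense, from a resource perspective.”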

In a parallel execution domain a principal semaphore may control sub-semaphores across nodes in a distributed system, such as a parent semaphore controlling a cluster of sub-semaphoric kernels or pseudokernels.

Cooperative multi-threading per node reduces overhead between threads if an application is sensible about its resource consumption. Cooperative multi-threading can exist in a subkernel where a superkernel manages subkernels with a pre-emptive scheme that slices up timeslots according to distributed resource availability.
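Cooperative scheduling can be shown in miniature with Ruby Fibers, where each task yields control voluntarily instead of being pre-emptively sliced. This is a toy round-robin scheduler, not a kernel; the task names are made up:

```ruby
# Cooperative multi-threading in miniature: each Fiber yields control
# voluntarily, so there is no pre-emption overhead between tasks.
tasks = %w[alpha beta].map do |name|
  Fiber.new do
    3.times { |i| Fiber.yield "#{name}:#{i}" }
    nil  # body finishes after the last yield
  end
end

log = []
until tasks.empty?
  # One round-robin pass: resume each fiber once, dropping finished ones.
  tasks.reject! do |fiber|
    step = fiber.resume
    log << step if step
    !fiber.alive?
  end
end
# log interleaves the two tasks: alpha:0, beta:0, alpha:1, beta:1, ...
```

The pre-emptive superkernel scheme described above would instead interrupt tasks on a timer; the cooperative version trades that safety for lower overhead when tasks are sensible about yielding.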

You might be thinking “oh but the network bandwidth is such a bottleneck” which is why a supersemaphore instructs subsemaphoric pseudokernels to either pre-emptively slice time or give processes permission to access the hardware resources until a child exits or Elvis has left the building.

Posted in cloud computing, designing scalable systems

PPLvPIPA: Elastic Provisioner takes a stand against SOPA, PIPA, and online civil liberty infringement at large.

The Internet has gone on strike.

Republicans and other cultural conservatives feared the day the Internet at large and all its nodes would simultaneously collapse… we haven’t seen that happen yet, but we have seen industry collaborators get on the same page politically and take a stand against the war on civil liberties in the form of PIPA and SOPA. Who is supporting this terrible attempt at law-making? Those who don’t know that intellectual property expires if you don’t maintain or share it, and some folks are trying to legislate their way out of that fact. The rest of us are taking a stand for what we believe is true. Information wants to be free, if you let it… and when you put a price on it, you’re essentially putting a price on your own head. The price of a troll’s head, technically.

people against PIPA and SOPA


It’s no longer a secret that Elastic Provisioner, clients, partners, et al. unanimously oppose legislation which attempts to control that which cannot be controlled… at the expense of the civil liberties which the USA and other countries fought so hard for. In “America” we thought we were going to win some kind of war against drugs, but we were on them… so we made peace with them. Now the war seems to be against online piracy. The same problem exists as before… people want drugs and pirated software, and those people are citizens… so it’s going to be very hard to control their behaviours. In fact, if we’re going to control them we’re all going to have to give up our privacy and innovation… just a little. Is it worth it? Not really. The fact is that some people would prefer to make the Internet back into tubes again. I have decided to take a stand against it, and you’ll probably hear about it on Twitter if you’re tuned into #STOPSOPA or #NOPIPASF or if you are attending the Silicon Valley PIPA and SOPA protest, brought to you by Hackers and Founders. If you’re taking computing personally, join us!

– Asher Bond

@PPLvPIPA: What sites are taking a stand against online erosion of civil liberties in the form of PIPA and SOPA?

Google, Inc.

Google blacks out to protest PIPA and SOPA

WikiMedia / Wikipedia

Jimmy Wales and Wikimedia oppose PIPA and SOPA measures
Jimmy Wales has announced that Wikipedia is abstaining from normal behaviour on this day of activism… unfortunately not in those words, but his were probably better. What I love about their implementation of the blackout is that their site appears to work normally until you submit a search query or click a topic, then all pages redirect to the blackout page.

The opposing political factions attack our Twitter account

Attackers reset the password on our Twitter account and falsely report spam to temporarily block access.

This is from Tony Baldwin (one of my Diaspora friends).
If SOPA passes and we post pirate bay links on senate.gov... WOULD THEY SHUT DOWN THEIR OWN WEBSITE?!?!?!?

Posted in eCommerce, personal computing

Dynamic-Periphery.com by McDevOps – You can take it with you.

Just got back from International CES. Nice to see some familiar faces and meet many new people! McDevOps makes computers for DevOps. The newest computer we’re working on is called Dynamic-Periphery™. Unsatisfied with the one-to-one constraint of personal computing, we decided that a workstation isn’t a personal computer. One workstation, powered by supercomputers, could be accessed by many tablets. But we couldn’t just use any tablets; we needed dynamic periphery. This means that one user may use several tablets in order to have a more tailored user experience, and be able to send their user experience to another user. Portable user experience is one of the most exciting features of cloud-based virtual desktop infrastructure. So we took it a step further and began designing specialized tablets for use with desktop supercomputing workstations. It just makes more sense in today’s software engineering, video production, and enthusiast gaming environments. After all, if DevOps culture doesn’t constrain what-could-be by what-is, then why should hardware constrain platform service software? We think it’s also better to have a consistent user experience in development and production, and we think a common yet flexible software framework (for example prototype-friendly structured programming in Dart, or open PaaS frameworks like Cloud Foundry and OpenShift) facilitates this efficiency in many software engineering practices. We’re also excited about companies like Canonical who have committed to providing top-notch long-term support for service-orchestration frameworks.

Dog-fooding the Supercomputer

But honestly, localized computation is only half of the fun of cloud VDI. We really wanted to rock the portable UX over the Internet globally. And that’s doable with a McDevOps microcloud account (contact me if you want an invite), whether or not you roll your own microcloud. Microcloud accounts will be free for engineers, developers, designers, and DevOps culturists… and in general free for anyone looking for work or something to hack on. But it’s not just a SaaS model, it’s a PaaS model from a software perspective. From the hardware perspective it’s a gateway appliance taking you through the pearly gates to supercomputing heaven in the cloud. Desktops are a heavy workload in and of themselves, especially in the aggregate. The problem with all the cloud hype in consumer electronics or “personal cloud” is that they’ve gotten away from cloud computing’s future value. The future value of cloud computing is that it offers scalability. As Dave Nielsen says, cloud computing is OSSM (On-demand, Scalable, Self-serviceable, and Measurable), and I say it’s OSSAM (adding Automation, which is implied in every letter of OSSM)… consumer electronics manufacturers haven’t really delivered the scalability components, but rather what seems to be an overprovisioned appliance or box. The cloud is not a box, nor a puppet show, but maybe more like a vending machine. Get served.

We might be engineers or developers but we’re often not a this-or-a-that we’re often both. And I think in DevOps culture this is the case. I think it’s also the case that a desktop hybrid microcloud can handle heavier video production workloads much better than a beefed up mac (request demo), due to parallel elastic provision at hyperscale supporting rendering workloads for example. And that’s just one example because rendering is just one video production workload. And when these guys get bored they play LAN parties which works really nicely with a desktop microcloud in your cube farm or wherever.

So think how software engineers play with supercomputers while video producers play 3-D shooters. It’s a competition, but for practical purposes the same infrastructure is used to prove the concept that collaboration is like competition on steroids… especially when you can use the same tools and share the same big data insights.

So at CES this year it really seemed as though cloud either meant wireless or SAN or NAS… but I think cloud storage is a nice low hanging fruit. Cloud persistence is the other benefit of microcloud. It’s a gateway to public utility persistence of files. So it takes the load off your tablets and keeps things locally accessible via ultra high speed bandwidth while it slowly persists remotely in heaven… eventually consistent and redundantly persistent… You can take it with you.

Posted in cloud computing, designing scalable systems, Test-Driven DevOps Design, Virtualization

The CLOUD is real… now what?

The CLOUD is upon you

In 2011 many were still wondering if the CLOUD really meant anything in terms of technology, dollars, and/or cents. Looking back on 2011, all I can see is a whirl of nebulocity surrounding what-is with what-could-be. Here’s what I think might change significantly in the next 12 months or so:

The CLOUD is real… WHERE’S MINE?

Ok, so we’ve seen people make money off cloud… now I want one. Go build me my own thing that makes money too. Make it look like the King and maybe the King will be forced to buy it… I mean, there can’t be three kings, can there? So now that 2012 is almost here… people are realizing the cloud isn’t just a nebulous swirl of vapor-ware… now let’s start the ASP second chance foundation. Do I need a license for that? I think there will be a lot of opportunities to abstract licenses with SaaS deliveries. Some may exploit the gimmicks that should not have been codified into the licenses in the first place. What goes around comes around, but by now the only ISVs who are likely to be affected by it are the monolithically most comprehensive solution providers who claim they invented everything. Invention by consolidation should be on the rise in 2012, by the way, I’m guessing.

ASP Second Life

Application service providers were right. Applications can often be served better warm, with human love. At minimum viability, a product contains at least one service component. Automation is great, but services contain humans and humans contain human error. Consumers love to cut out the middle-man, but once they’ve made all their man-in-the-middle attacks and all their paper dolls of sliced-and-diced middle-men, they realize that they want service. So they go to http://asherbond.com/contact and ask for technical advice. Anyone who knows Second Life (or other virtual realities) knows that people like to design things and build things themselves. But if you’re going to build a cloud, please ask yourself where the economies of scale exist. Now that the technology concepts have been proven in business practice, many more customers are going to ask for cloud service, but what they’re really asking for is people (sometimes via a RESTful API).

The difference between application services and software-as-a-service is abstraction measured by a degree of multi-tenancy.
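That degree-of-multi-tenancy distinction can be made concrete. A minimal sketch, with illustrative names: an ASP would run a separate application instance per customer, while a SaaS keeps one shared table and scopes every query by tenant.

```python
class MultiTenantTable:
    """One shared table, rows scoped by tenant_id — SaaS-style multi-tenancy.
    An ASP would instead run a separate instance per customer."""

    def __init__(self):
        self.rows = []

    def insert(self, tenant_id, record):
        self.rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id):
        # Every query is automatically narrowed to the caller's tenant,
        # which is the abstraction that makes one deployment serve many.
        return [row for row in self.rows if row["tenant_id"] == tenant_id]
```

The abstraction lives in `query`: subscribers share infrastructure but never see each other’s rows.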


They thought regulations and compliance “hurdles” created jobs… and they were right… in the short term… but what they might have missed is that they also create jobs for service providers who can broker emerging technology as a service.

Business-Process-as-a-Service (#BPaaS)

What kinda cloud u talking bout? We got SaaS, BPaaS, and my personal favorite: GSaaS. GSaaS loves you, brother. Now let me show you how to run your business. I expect to hear a lot of “what kinda PaaS” from developers and a lot of ooooo aaaah from business process practitioners… but the process consultants deserve a chance to really shine and this is it. I got my developer card revoked a couple times for saying “Cloud is SOA,” but I got a new one from VeriSign, and now I think developers are starting to be cool about it now that they realize that OASIS was right, and that so was I since I said so too, neh. The first guy who raked my graphic depictions over the campfire did admit, however, “yeah ok man… I guess if you’re talking about REST.” So it turns out predictions in 2010 were accurate. I think service-component architecture and visual programming are going to play a role in RESTful integration as software components are service-oriented. I strongly expect scalability requirements and cloud-readiness motivators to stir the pot. Service-orientation is inevitable when technology is applied. Developers are empowered as decision makers and technical advisors, so maybe they would be interested in subscribing to business-process-as-a-service since they have more of a technical focus.

The most COMPREHENSIVE solution – brought to you by the Federated Association of Governing Consolidators

So what if you’re an investor and you buy and sell technology securities and you want some of that good old-fashioned ROI. How can you make any money in this cloud biz now that the developers are taking over? Oh yeah, there’s this little thing called the most COMPREHENSIVE solution. Big comprehensive, little solution. That’s right folks. The time is NOW. Buy everything. Your cloud portfolio is about to make it rain, but before you buy everything… you have to know how this stuff works and what it does. Haha, just joking… now back to our regularly consolidated program… I think in 2012 we might continue to see enterprisey comprehensive solution providers trying to convince people that they are the box you can put your cloud into… or are they more of a comprehensive solution “cloud” that spans actual clouds with meaningful definitions which exist in actual physical datacenters? Who gives these large enterprisey comprehensive solution providers the authority to do this? The customer lets them get away with it because they sponsor industry events and they are often older companies who played a role in many of the technologies that end up as cloud. They equivocate between distribution models of cloud computing; for example… they might get behind the technology curve doing tons of non-emerging, has-been-mature-for-a-decade-or-so SaaS business, then pretend they are powering IaaS today on a public scale… when the emerging technologies are PaaS-based.

DevOps as more of a cultural paradigm shift and movement and less of a title

People are going to start either killing each other based on their choice of configuration management / automation framework, or they are going to start getting along more and not putting DevOps in their title unless it has Engineer at the end of it and Lead at the front of it. Designers are going to be constrained by tighter iterations and Ops are going to punch developers just because they haven’t been punched before and everyone goes through it.


In the old days, developers could be divided and conquered by business managers much more easily. The days of developers having a great idea that no one understands are not over… but “I don’t understand how this stuff works” is no longer an excuse now that we have so many services available. If you don’t know how something works… just ask… only now… you don’t even have to ask how to do it, you can ask for service. If you don’t know how something works, that something might be new and valuable. Dustin said it already, but I think providers of public offerings are going to focus more on influencing the decisions of software developers. Software developers represent change in the direction of requirements and demands… not just whatever seems wanted right now… I think developers often try to guess (like Steve Jobs, R.I.P.) what people need, since they’re probably going to want that eventually. I could probably guess that a pregnant mom is going to be in the market for diapers sooner or later. Hopefully sooner rather than later. Developers are in the early stages from cradle to grave. They iterate through software development and application life cycles and deliver features based on requirements. Those features become part of a common framework that can be offered more publicly. It’s not new, but software vendors love to put developers on their platforms. What’s new is that developers are not-so-divided and not-so-conquered… so they probably demand a higher degree of ubiquity in their distribution channels… and a higher degree of interoperability in their language frameworks.

Applications are most portable when the target distribution platform is based on open-standards.

Public Platform-as-a-Service (PaaS) Top Doggery

Not everyone can be King of the Hill, but I think there’s room for a whole circle of winners in the market segment of public PaaS. We have seen 3 generations of public platform service offerings to developers:

Totally Rigidly Arcane PaaS

The first platform services with public offerings forced the developer to conform to a proprietary framework. The back end was a confidential operation delivered as a multi-tenant service to subscribers who learned how to conform to the proprietary framework. The framework may have been based on Python or Java, but it constrained the developer to the platform of implementation rather than the standards of the enabling technologies within.

Still-exploiting-the-constraint PaaS

This type of platform is built secretly and operates as a proprietary service, but relies on open-source components to deliver services which are mostly compliant with open-standards. A true language is always an open-standard.

Open PaaS – as it should be

Third generation platform services are completely portable. This type of middleware essentially replaces the role of the “operating system” as a software component with “systems-in-operation” instantiated as objects by a framework of classes delivered as a platform of services for developers to build things on top of. The distribution model allows for services to be delivered with scalability, flexibility, interoperability, and high availability, and it also allows for platform portability and application interoperability by default. The evolution of service-component architecture (SCA) may also drive the adoption of visual programming in the cloud as practical users are abstracted by service and frictionless design becomes the practice.
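To make “systems-in-operation instantiated as objects by a framework of classes” less abstract, here’s a toy sketch. The service kinds and names are invented for illustration, not any real PaaS catalog; the idea is that the platform hands back running objects, not static installs.

```python
class Service:
    # A "system-in-operation" rather than a static operating-system install.
    def __init__(self, name):
        self.name = name
        self.running = False

    def start(self):
        self.running = True
        return self

class WebService(Service):
    pass

class QueueService(Service):
    pass

class Platform:
    """A framework of classes that instantiates services on demand."""

    registry = {"web": WebService, "queue": QueueService}

    def provision(self, kind, name):
        # The platform returns a started object; the subscriber never
        # touches an operating system underneath it.
        return self.registry[kind](name).start()
```

Portability falls out of the design: anything that can instantiate these classes can host the platform.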

Next Generation PaaS+

I think of PaaS+ as a value-added platform-as-a-service which may include business processes as a service, or may include additional DevOps tooling or methodologies-as-a-service (MaaS?)… whatever. The framework (tool) teaches you the process. In a toolcloud you might experience something like a toolbox… for example, when you’re using Gmail, you realize that Gmail is a Google approach to email… it’s not just an “email program” … so you get some agility along with the nebulocity of the cloudy SaaSfulness. So I think that the next generation PaaS+ providers will need to put their pluses on by adding some kind of business or other practical high-level value. Some of this high-level value can be delivered in the form of integration. CloudBees has moved forward with their initiative to add continuous integration via Jenkins/Hudson integrated service components in their PaaS offering. I think DevOps toolclouds will emerge via the PaaS delivery model, and that, like CloudBees, other cloud service providers who have a PaaS offering may choose to offer a new chocolate or strawberry flavor of PaaS for Dev, and possibly a vanilla PaaS for their long-term-support production interoperability and highly available portability PaaSes. I guess Leeloo Dallas could call that one a multi-PaaS just in time to kiss Korben and save the world before New Year’s.

Predictive Monitoring and SLAs

Predictive monitoring tools will leverage Hadoop and other big data / analytics. The abstraction of data itself may become an abstract business-process-as-a-service and drive innovation in system performance as SLAs are enforced and predictive deep monitoring tools allow autonomous and dynamic autoscaling of instances in resource pools.
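A minimal sketch of the predictive-autoscaling idea, with invented numbers: predict the next load from a trend-adjusted moving average of recent samples, then size the pool so the prediction fits under an assumed 80% SLA headroom. Real predictive monitoring uses far richer models; this only shows the shape of the loop.

```python
import math

def predict_next_load(samples, window=3):
    # Naive predictor: moving average of recent samples plus the recent trend.
    recent = samples[-window:]
    average = sum(recent) / len(recent)
    trend = recent[-1] - recent[0]
    return average + trend

def autoscale(samples, capacity_per_node):
    # Size the pool so predicted load stays under 80% of capacity
    # (an assumed SLA headroom), never dropping below one node.
    predicted = predict_next_load(samples)
    return max(math.ceil(predicted / (capacity_per_node * 0.8)), 1)
```

The point is that scaling decisions are made before the load arrives, not after the SLA is already blown.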

Resource Pool Expansion and Utility Computing Commoditization

I think the price of public cloud will start to look like a true utility and come down quite a bit. Companies like Amazon Web Services probably would lower their prices if the demand weren’t so high. When more IaaS vendors such as Rackspace, OpSource, Datapipe, et al. enter the space (they’re already here) and start to compete for customers, the price of raw x86-compatible IaaS should come down quite a bit and make people re-think their hybrid strategies. For now, many organizations may benefit from a flexible hybrid cloud strategy that (for example) leverages their existing infrastructure to orchestrate public cloud services.

Security implications of Cloud Computing

Cloud computing lowers the barriers to entry for people who ordinarily could not access high-performance clusters of nodes to do complex brute-force math research on your “encrypted” password… or just fire up an array of nodes and aim it at the SSH port. Nothing they couldn’t do in the old days of dark matter / botnet clouds. What IP address did that come from? A leased one in a classy datacenter. I think public cloud providers are going to become very security-savvy (actually, they really are top notch in most cases). It will be interesting to see how they empower themselves from the big data + hypervisor perspective.

Rinse that CLOUD out ‘cha mouth boy!

At some point… analysts are saying that there is a “hype cycle” in which cloud word sentiment shall become stale. The word cloud will either become ultra-ubiquitous like industry insiders are saying… or it may become a bit blasé… numb from the excessive nebulocity of smoke and mirrors becoming clouds too. I think if we can refrain from partying too hard it might help. Happy New Year’s Eve. Be responsible and make backups.

Posted in cloud computing, Test-Driven DevOps Design

Geospace in Social Context

Social networking software such as Diaspora‘s Aspects and now also Google+ Circles is becoming aware of users’ social contexts. Contextually-aware social networks understand, adapt to, and ideally leverage the information which humans use to relate to one another. Because humans often interact via ad-hoc channels of communication, it’s often valuable for software to adapt to these dynamic channels which have been recently created or destroyed between communicating humans. By abstracting application logic and narrowing the scope of data queries, software controllers can present more relevant views to a user based on dynamic user input or system intelligence. Often, for mobile users, social contexts are geographically based. Without requiring manual user input, software can behave more intelligently in cases where the geographic context narrows or broadens the scope of a dataset.

Is Context King?

A global positioning system in a networked mobile device can provide a narrower data scope (based on coordinate tracking) to an event handler or software controller which queries a cache of temporal data or even a larger set of persistent data. When focused views of user-defined human-relational data sets intersect dynamic geospatial coordinates, systems can more efficiently learn how to provide more relevant information about changing social contexts. Moreover, this process can be done with less manual input by users. As a mobile user moves through geographic space, his or her social context may change based on the absence or presence of other people. As distributed systems and social networks become more aware of frequently changing, subtle, geographic social contexts… it becomes increasingly possible (assuming the information is shared with users) for users to find places and people based on their interests. From a social perspective, it’s really quite empowering to have this much abstraction between a venue and a place. There are a lot of “cloud” companies around these days, but in order to really specialize in abstraction one must understand what becomes the focus instead of that which we abstract. The focus is where the actual power (value) comes from. In this case the power is in the ability to become less reliant upon physical, geographic constraints and more focused on social interests (whether more or less sophisticated) and more focused on relatable interactions.
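The geographic narrowing step is easy to sketch: given a coordinate from the device, filter the social dataset down to people inside the current radius. The names and coordinates below are made up for illustration; the haversine formula is the standard great-circle distance.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two coordinates, in kilometers.
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def nearby(people, lat, lon, radius_km):
    # Narrow the social dataset to whoever is inside the geographic context,
    # so the controller only presents views relevant to where the user is.
    return [p["name"] for p in people
            if haversine_km(lat, lon, p["lat"], p["lon"]) <= radius_km]
```

Swapping the radius (or the user’s position) is exactly the “scope narrows or broadens” behavior described above, with no manual input from the user.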

For example, if Barack Obama is in town, perhaps I’d like to invite him as a guest to an upcoming local or regional event. If not, perhaps I’d like the system to invite the next person who might be interested (in this example… Tom Anderson, my first Myspace friend). The system can know (often based on what’s probably voluntary user input) that Rick Perry isn’t as interested in this particular event (perry-wink)… even though both Rick Perry and Barack Obama may exist in the same or similar regional space at the same time. But fortunately I still have my friend Tom. The converging technologies (big/aggregate data abstraction, mobile computing, geospatial contextualization, and social contextualization) are not so new when standing alone, but when integrated they support a trend of social sophistication which is more agnostic of physical infrastructure and the places where it exists. If I had to summarize this sophistication in one word it would probably be “freedom.”

Skeptics might argue that they don’t like the idea of software or systems knowing more about them. Skeptics might also argue that they don’t like knowing << unpleasant fact(s) >>. Often, an unpleasantness (fear) is associated with aggregation of knowledge within a technological or otherwise sophisticated framework. You could call it a “fear of singularity” or “fear of robots taking children somewhere else” … but this year and probably next year I’m likely to be more afraid of ignorance than of artificial, collective, or social intelligence. Knowledge is power, computers are tools, and the more they know about us the more we can know about ourselves. Privacy is important, and it’s made possible when encryption and private ownership of data are possible. You are free to navigate your social context. Is your data? Is “your” data your data?

Peep the Context.

Posted in database architecture and data modeling, social networks

The Evolving Definition of Cloud Computing

Different working-groups have defined and re-defined cloud computing over the last few years. Peter Mell, Timothy Grance, Murugiah Souppaya, Lee Badger and other brilliant minds working together with NIST (the National Institute of Standards and Technology) have drafted a document characterizing cloud computing by these essential characteristics:

On-demand self-service

A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service’s provider.

Broad network access

Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

Resource pooling

The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

Rapid elasticity

Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

Measured Service

Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

After attending a few Cloud Camp events I had the privilege of discussing what cloud computing is with Dave Nielsen. Here is the OSSM (pronounced awesome) CloudCamp definition of cloud computing Dave Nielsen has been presenting:

On-demand: the server is already set up and ready to be deployed
Self-service: the customer chooses what they want, when they want it
Scalable: the customer can choose how much they want and ramp up if necessary
Measurable: there’s metering/reporting so you know you are getting what you pay for

While I really couldn’t dispute the awesomeness of Dave’s definition, I challenged it. I just felt the need to add automation to the definition. Here is the OSSAM definition I came up with:


On-demand

Architecture is implemented by an operating framework that allows for rapid elasticity. This framework determines which hardware and software resources are required to meet a range of service-level agreements and subscriber (i.e. customer) expectations.


Scalable

Scalability is achieved on the back end via tight and loose coupling of hardware resources, orchestrated to meet the changing demands of different use cases. A grid may provide the computational and storage resources, or a network of edge caching servers may provide content distribution. Many public cloud providers offer both. On the front end, virtual and paravirtual machinery provides subscriber-facing service nodes powered by an elastic hardware and network infrastructure layer (resource pool) of computational nodes and storage area networks on the back end. Vertical scaling is limited to the capacity of one piece of today’s best hardware, but cloud scalability means that arrays of nodes can be offered as a service layer or unit… which provides horizontal scalability, rapidly on demand.
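The horizontal-versus-vertical distinction can be shown in a few lines. A toy sketch, with invented node names: one service layer fronts an array of interchangeable nodes, and requests are spread across them round-robin.

```python
from itertools import cycle

class HorizontalArray:
    """An array of interchangeable nodes behind one service layer."""

    def __init__(self, node_names):
        self.nodes = list(node_names)
        self._next_node = cycle(self.nodes)

    def route(self, request):
        # Any node can serve any request, which is exactly what lets the
        # layer scale out by adding nodes instead of scaling up one big box.
        return next(self._next_node), request
```

Growing capacity means handing the array another node name; no single machine ever has to get bigger.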


Self-serviceable

A multi-tenant framework must exist to provide at least two tiers of service layers. The underlying infrastructure tier represents technical operations and the top tier(s) represent one or more abstracted service layer(s) … oriented to providing services specific to an applied scope of operations (for example, a business use case for a particular department or organizational unit). Self-service also means that you have the ability to manage your own service layer(s) if that is how you, the subscriber, decide to provision your resources. For example, if someone is a systems administrator, he or she may decide to provision a computer in the cloud with or without a managed operating system, or with or without the management of a software library layer. Unmanaged commodity virtual machinery is an example of this, but certainly a fully managed virtual machine could fall under the category of ‘self-service’ if the subscriber tells the API to provide them a managed virtual machine. This leads to my bastardization of the CloudCamp definition…
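The managed-versus-unmanaged choice boils down to flags on the provisioning request. A minimal sketch with illustrative field names (not any real provider’s API): the subscriber’s request decides how many layers of the stack the provider manages.

```python
def provision(api_request):
    # Self-service: the subscriber's API request decides how much of the
    # stack the provider manages. Field names here are illustrative.
    vm = {"cpus": api_request.get("cpus", 1),
          "managed_layers": ["infrastructure"]}
    if api_request.get("managed_os"):
        vm["managed_layers"].append("operating-system")
    if api_request.get("managed_libraries"):
        vm["managed_layers"].append("software-library")
    return vm
```

An empty request gets the unmanaged commodity machine; turning on the flags yields the fully managed flavor, and both are equally “self-service.”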


Automated

There must be some degree of automation for cloud computing to be a true “vending machine” and more than just a puppet show. When an order is placed, human resources, engine-squirrels, or monkeys must not be employed to carry out the provisioning of services… no matter how rapidly they may be able to provide services. The system must, via an API, automatically and mechanically provide services within the range of some kind of service-level agreement. While it’s arguable that this is part of “on-demand” services, I think it’s worth making a distinction. The important point is that on a large scale, on-demand services can’t exist without automation. Fail-over at the hardware level, for example, may not be required for cloud computing to be defined… but fail-over is a crucial piece of the puzzle if storage and compute nodes within a cluster are to provide reliable and sustainable support for complex layers of virtualization.
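The vending-machine idea in code, as a toy sketch (the SLA limit and instance names are invented): an order against the API either provisions mechanically or is rejected mechanically, and no squirrel, monkey, or human is ever in the loop.

```python
class VendingMachineCloud:
    """An order placed against the API is fulfilled mechanically, with no
    human step between the request and the running service."""

    def __init__(self, sla_max_instances=10):
        self.sla_max_instances = sla_max_instances
        self.instances = []

    def order(self, count):
        # Enforce the service-level range mechanically; reject the order
        # rather than queue a human to sort it out.
        if len(self.instances) + count > self.sla_max_instances:
            raise ValueError("order exceeds the service-level agreement")
        new = [f"instance-{len(self.instances) + i}" for i in range(count)]
        self.instances.extend(new)
        return new
```

Put coin in, get instance out; put too many coins in, get a well-defined error out. That is the distinction from a puppet show.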


Measurable

Metering and reporting are important not only for billing in public cloud service implementations, but also in private clouds which service enterprise departments. Measurement provides a quantitative analysis of resource utilization and allows for more efficient use of computational resources. On the front end, metering tells subscribers how many resources they are consuming; on the back end, measurement should also tell infrastructure operators when to add hardware resources to the computational / storage grid. With proper fail-over, automation and orchestration these resources should be highly available, and a monitoring system should measure that availability.
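At its simplest, metering is just accumulation of resource-seconds per subscriber. A toy sketch with invented units and rates, to show the front-end (billing) and back-end (capacity) views of the same numbers:

```python
class Meter:
    """Accumulate resource-seconds per subscriber for billing and planning."""

    def __init__(self):
        self.usage = {}  # subscriber -> accumulated resource-seconds

    def record(self, subscriber, resources, seconds):
        # Back-end view: raw consumption, which operators can aggregate
        # to decide when to grow the grid.
        self.usage[subscriber] = self.usage.get(subscriber, 0) + resources * seconds

    def bill(self, subscriber, rate):
        # Front-end view: the same numbers, priced at a rate, so the
        # subscriber knows they are getting what they pay for.
        return self.usage.get(subscriber, 0) * rate
```

The same counter drives both transparency directions that the NIST “measured service” characteristic describes.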

Posted in cloud computing