Let’s imagine that yours is an agile software development organization composed of 10 teams of three to eight developers, plus all of the other roles each agile team requires.  Our objective is to continually assess the effectiveness of software development personnel against known delivery and quality performance on an epoch-by-epoch basis, where an epoch corresponds to a spike, sprint, or release.

 

Let’s further imagine that your Application Lifecycle Management (ALM), Quality Assurance (QA), build pipeline (CI/CD), and source code management (SCM) platforms can emit events for configurable business activities that result from team member activity.  Specifically, let’s say we are currently interested in the following business events, tracked by individual.

 

ALM) From Application Lifecycle Management we want to know when any of the following occurs: 1) Task Defined, 2) Task Assigned, 3) Task Completed, 4) Task Rework Assigned, 5) Daily Task Remaining, 6) Feature Suggestion, and 7) Feature Acceptance.

 

CI/CD) From the build pipeline we want to know: 1) Build Broken, 2) Build Fixed. 

 

QA) From Quality Assurance we want to know when: 1) Bug Reported, and 2) Bug Fixed.

 

SCM) From source code management we want to know when there is a: 1) Pull Request or 2) Push, and who participated as a 3) Pair Programming Contributor or 4) Commit Reviewer.
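As a concrete, purely illustrative sketch, the event taxonomy above might be captured in Elixir (the ecosystem we discuss later in this series); the module and atom names here are our own invention:

```elixir
defmodule Gamification.Events do
  @moduledoc "Business events collected per individual, grouped by source system."

  @events %{
    alm: [
      :task_defined, :task_assigned, :task_completed, :task_rework_assigned,
      :daily_task_remaining, :feature_suggestion, :feature_acceptance
    ],
    ci_cd: [:build_broken, :build_fixed],
    qa: [:bug_reported, :bug_fixed],
    scm: [:pull_request, :push, :pair_programming_contributor, :commit_reviewer]
  }

  @doc "Returns true if `event` is a recognized event for `source`."
  def valid?(source, event), do: event in Map.get(@events, source, [])
end
```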

 

From these system-generated events we can readily glean activity and community traits related to ALM, CI/CD, QA, and SCM.  But that does not tell the whole story.

 

Intangibles for which there is insufficient information might include identifying the individuals who are: 1) most invested in helping others succeed, 2) most willing to share knowledge, 3) most responsive to others’ requests, and 4) most pleasant to work with, or who otherwise contribute positively to quality of work life.  These intangibles, and others like them, are what gamification can contribute to the data available from ALM, QA, CI/CD, and SCM.

 

We can capture these intangibles by allowing team members to record observations such as the following about their colleagues at task completion and at end-of-day junctures: 1) Helpful, 2) Pleasant Experience, 3) Responsive, 4) Positive Observation, 5) Peer Feature Suggestion Like, 6) Unhelpful, 7) Unpleasant Experience, 8) Unresponsive, and 9) Negative Observation.  Each observation lands on the ledger of the person it is about.  Each negative observation (6–8) is also accompanied by a “Negative Observation” (9) entry attributed to the person making it; beyond that, the observer is not logged, which both preserves anonymity and discourages gratuitous negative observations.
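Here is a minimal sketch of that observation mechanic, assuming hypothetical module and ledger structures.  The key point is that only negative observations leave a trace of the observer, and only on the observer’s own ledger:

```elixir
defmodule Gamification.Observations do
  @positive [:helpful, :pleasant_experience, :responsive, :positive_observation,
             :peer_feature_suggestion_like]
  @negative [:unhelpful, :unpleasant_experience, :unresponsive]

  @doc """
  Records an observation on the subject's ledger. A negative observation (6-8)
  also places a :negative_observation entry on the observer's own ledger; the
  observer is not otherwise logged, preserving anonymity.
  """
  def record(ledgers, subject, _observer, observation) when observation in @positive do
    add(ledgers, subject, observation)
  end

  def record(ledgers, subject, observer, observation) when observation in @negative do
    ledgers
    |> add(subject, observation)             # lands on the subject's ledger, unattributed
    |> add(observer, :negative_observation)  # the only trace of the person observing
  end

  defp add(ledgers, person, observation) do
    Map.update(ledgers, person, [observation], &[observation | &1])
  end
end
```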

 

For each auto-generated and manually contributed event type, the game administrator assigns “debit” points for positive activities and “credit” points for negative activities, according to what the organization values.  

 

For example, the game administrator might give the following values to a few of the above-mentioned events: “Task Defined” debits (positive value) the user’s game journal by one, whereas “Task Rework Assigned” credits (negative value) it by one.  Each event’s value can be reassigned for each game played.
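A minimal sketch of that configuration, with hypothetical names and values; nothing here is prescriptive beyond the idea that every event carries a per-game point value:

```elixir
defmodule Gamification.Scoring do
  @doc "Values one administrator might assign; every event can be revalued per game."
  def example_config do
    %{
      task_defined: 1,           # debit: positive value
      task_rework_assigned: -1,  # credit: negative value
      build_fixed: 1,
      build_broken: -2
    }
  end

  @doc "Points an event earns under a given game's configuration (unlisted events score 0)."
  def score(config, event), do: Map.get(config, event, 0)
end
```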

 

The gamification model keeps a running total for each individual, as well as each change, so that what makes up the total can always be explained.  An instance of the game runs for each team, in alignment with spikes, sprints, and releases.  Refinements can be made in the configuration of each game, and analytics can be run within and across games.
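And a sketch of the journal itself, again with hypothetical names: every change is retained, so the running total can always be explained:

```elixir
defmodule Gamification.Journal do
  defstruct person: nil, total: 0, entries: []

  @doc "Posts a scored event, updating the running total and logging the change."
  def post(%__MODULE__{} = journal, event, points) do
    %{journal | total: journal.total + points,
                entries: [{event, points} | journal.entries]}
  end

  @doc "Explains what makes up the total: every {event, points} change, in order."
  def explain(%__MODULE__{entries: entries}), do: Enum.reverse(entries)
end

# Usage, with the example values above:
#   journal = %Gamification.Journal{person: "dev_1"}
#   journal = Gamification.Journal.post(journal, :task_defined, 1)
#   journal.total  #=> 1
```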

 

The next post will discuss the Temporal Linked Data® (TLD) model that supports Agile Gamification.

 

Magnified Hubble image of Abell 1689 (a dense cluster of galaxies as it was 2.2 billion years ago) to research dark matter in an effort to better understand dark energy—finding creative ways to fill in the gaps of what we do not yet know. (Image credit: NASA/ESA/JPL-Caltech/Yale/CNRS)

The English poet John Donne tells us that “no man is an island.”  In the same way, nothing lives in isolation.  How, then, do we identify boundaries for microservice implementation and deployment?  Domain-Driven Design offers patterns to implement a “bounded context,” itself a pattern, but relies on the business to define that context.  This is as it should be.  The observation to be made here is that the underlying technology with which the bounded context is implemented will determine the rigidity or flexibility of the implementation.

 

Linked Data is conceived to allow dynamic linking between any two resources (URIs).  As a result, irrespective of where the boundary is set to describe a resource, our own Temporal Linked Data® (TLD) makes it easy to reference a TLD aggregate as an OWL Object Property link; whether the aggregates are deployed together or apart, the container will resolve the link.

 

Where relational, columnar, and object data management platforms require these relationships to be accounted for ahead of time, TLD allows the connections to be made along the way, according to well-bounded business semantics, referenced as opaque RDF resources.
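To make the idea concrete, here is a conceptual sketch in Elixir with entirely hypothetical names: a link is just a {property, URI} pair added whenever the business calls for it, and the container resolves it locally or remotely:

```elixir
defmodule TLD.Links do
  @doc "Adds an object-property link from an aggregate to another resource's URI, at any time."
  def link(aggregate, property, target_uri) do
    Map.update(aggregate, :links, [{property, target_uri}], &[{property, target_uri} | &1])
  end

  @doc """
  Resolves a link: an aggregate deployed in this container is fetched locally;
  otherwise the URI is dereferenced over the network.
  """
  def resolve(container, target_uri) do
    case Map.fetch(container.local_aggregates, target_uri) do
      {:ok, aggregate} -> {:local, aggregate}
      :error -> {:remote, dereference(target_uri)}
    end
  end

  # Placeholder for an HTTP dereference of the Linked Data URI.
  defp dereference(target_uri), do: {:get, target_uri}
end
```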

 

By decoupling design-time concerns from runtime concerns, the iterative and incremental deployment of TLD microservices represents a strategy for iterative and incremental growth of business functionality.

 

The Earth as Well Bounded Microservice in the Universe (NASA Image of the Day)

For my part, I like to know something relevant about the person whose work I am reading.  To this end, I thought it would be of interest to relate how our auto-generated Hybrid Transactional/Analytic Processing (HTAP) by way of Temporal Linked Data® (TLD) came about.

 

For nearly 30 years, BRSG Consultants have been business-school-trained IT management consultants, ISV employees, and entrepreneurs.  Our transition from Smalltalk to Java began in late 1996.  We have long preferred technologies that allow us to model the business the way it is, such that business personnel would recognize the abstraction.  Lean and agile business process on the management side, and enterprise distributed object systems on the technology side, have been our calling card.  From a Java perspective, our HTAP efforts have focused on in-memory data grids (IMDGs) for operations and Hadoop/Spark platforms for analytics.

 

Along the way, we found the combination of Semantic Web, Linked Data, and Actor Model concepts to be of interest for their commercial potential.  While this proved to be a heavy lift to implement in a commercially viable way in the Java ecosystem, our efforts became a reality when we turned our attention to Elixir and the BEAM ecosystem.

 

We have no stones to throw at the great technology we have used to date.  Nothing has changed there, and we are happy to provide services in these areas.  But for our own product development efforts, when given the option, we have simply found the BEAM ecosystem to be more economically and technically compelling.

 

We hope that you will find our reasons to be of interest as we share them along our HTAP TLD journey.

 

Modern optics and the Hubble Telescope allow us to see a galaxy 270 million light years away and a Milky Way star only 2500 light years away — what we are able to see this century is not new, just new to us (NASA Image of the Day)

Low-friction data structures are those that: 1) are self-describing, 2) directly represent that which they model, and 3) do not require transformation between operations and analytics.  Our graph-throughout approach to Hybrid Transactional / Analytic Processing system development encourages a direct connection between business and IT where data model creation can be collaborative.  
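As an illustration, consider data held as RDF-style triples; the module and vocabulary names here are invented for the example.  The same structure is self-describing, mirrors the model directly, and serves an analytic query without a transformation step:

```elixir
defmodule LowFriction do
  @doc "An order modeled directly as RDF-style triples; the names are illustrative."
  def example do
    order = "https://example.org/order/42"

    [
      {order, "rdf:type", "ex:Order"},
      {order, "ex:placedBy", "https://example.org/customer/7"},
      {order, "ex:total", 99.5}
    ]
  end

  @doc "The same triples answer an analytic question with no transformation step."
  def total_revenue(triples) do
    triples
    |> Enum.filter(fn {_s, predicate, _o} -> predicate == "ex:total" end)
    |> Enum.map(fn {_s, _p, amount} -> amount end)
    |> Enum.sum()
  end
end
```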

 

Once the discrete, well-bounded RDF data model is designed, the backend system is generated, ready for: 1) behavioral augmentation where necessary, 2) testing, and 3) deployment.  Monoliths and command-and-control are out; well-bounded, distributed, peer-to-peer microservices are in.
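Purely by way of illustration, and not our actual generator input, the hand-written portion of such a system might be no more than a declarative model like this, from which the service is generated:

```elixir
defmodule Models.Order do
  # The entire hand-written artifact: a declarative, well-bounded model.
  @model %{
    class: "ex:Order",
    properties: [
      {"ex:placedBy", :object_property},  # an explicit link to another aggregate
      {"ex:total", :datatype_property}    # a literal value
    ]
  }

  def model, do: @model
end
```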

 

This low-code approach helps address: 1) the human resource impact of adopting new technology, 2) the communication overhead of translating business concerns into production enterprise software, 3) the adaptability of systems to business needs, and 4) the explicit connection between operational and analytic systems.

 

In contrast to relational and columnar data management, which rely on implicit relationships embedded in SQL, Temporal Linked Data® (TLD) makes an explicit connection between temporal data aggregates.

 

Likewise, where Event Stores keep all domain events together, TLD keeps all temporal data alongside the aggregate to which it belongs.
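The following sketch, with a hypothetical structure, illustrates both points: temporal history lives alongside the aggregate it belongs to, and connections to other aggregates are explicit links rather than implicit joins:

```elixir
defmodule TLD.Aggregate do
  defstruct uri: nil,
            state: %{},
            history: [],  # temporal data kept alongside the aggregate it belongs to
            links: []     # explicit {property, target_uri} connections to other aggregates

  @doc "Applies a change, preserving the prior state as temporal history."
  def apply_change(%__MODULE__{} = agg, timestamp, changes) do
    %{agg | state: Map.merge(agg.state, changes),
            history: [{timestamp, agg.state} | agg.history]}
  end
end
```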

 

Explicit connections are both hard and soft, technical and social.  Hybrid Transactional / Analytic Processing by way of TLD explicitly connects:

 

  • Temporal data aggregates,
  • Operational data with analytic data for realtime analysis,
  • Business and IT personnel, and
  • Business concepts with production deployments, by the shortest path between the two.

 

Amundsen Scott South Pole Station, a low-friction location to view the night sky (NSF Public Domain Image)