Our generation has the unique privilege of being able to see back to within a few million years of the beginning of time.  How far back one is able to look depends on the technology that is used, but our Universe’s past is there to be seen.  Temporal Linked Data® (TLD) naturally keeps a record of enterprise data changes to enable another kind of time travel, the story of enterprise data, for both operational and analytic purposes.


This series of weblogs introduces TLD as a transactional, low-code, enterprise-class compute and temporal data cluster that naturally projects all writes to a world-class big data graph analytics platform, such as: 1) a third-generation graph database for analysis, machine learning, and explainable artificial intelligence by way of TigerGraph, and/or 2) an enterprise knowledge graph, ML, and AI by way of ReactiveCore.


For a high-level understanding we will briefly explore these subjects.


For a more concrete understanding we will use a gamification example, described as follows.


The technological innovations represented by the BEAM ecosystem and third-generation graph databases allow for the possibility of building enterprise systems that simultaneously account for operational and analytic concerns.  We look forward to taking this fast-data, big-data, HTAP, Temporal Linked Data® journey with you.


Our Temporal Universe

In a prior post, enterprise data was compared to a “hydraulic data cycle” in which analytics would rain insight from precipitation drawn across “all” operational environments.  Rather than treating analytics as an afterthought, HTAP raises “business guidance” through analytics to a first-class citizen alongside operational systems by way of realtime, temporal data projections.
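The temporal record behind those projections can be pictured as an append-only log of timestamped facts that supports “as of” queries over the history of the business.  A minimal sketch in Python, assuming a simple (subject, predicate, object, recorded-at) layout; the `TemporalStore` name and the gamification-style data are illustrative, not part of any TLD product API:

```python
from datetime import datetime

class TemporalStore:
    """Append-only store of timestamped facts; nothing is ever overwritten."""

    def __init__(self):
        self.facts = []  # log of (subject, predicate, object, recorded_at)

    def assert_fact(self, s, p, o, at):
        self.facts.append((s, p, o, at))

    def as_of(self, s, p, at):
        """Return the value of (s, p) as it was known at time `at`."""
        value = None
        for fs, fp, fo, t in self.facts:
            if fs == s and fp == p and t <= at:
                value = fo  # later assertions supersede earlier ones
        return value

store = TemporalStore()
store.assert_fact("player:1", "score", 100, datetime(2021, 1, 1))
store.assert_fact("player:1", "score", 250, datetime(2021, 6, 1))

store.as_of("player:1", "score", datetime(2021, 3, 1))  # -> 100
store.as_of("player:1", "score", datetime(2021, 7, 1))  # -> 250
```

Because the log only grows, both the current state and every prior state remain queryable, which is the “time travel” property described above.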


Information Technology should view itself as part of, and critical to, the success of the businesses it serves by regularly identifying, prioritizing, and delivering on the needs of The Business as enabled by: 1) collaborative and agile processes, and 2) scalable and reliable microservice deployments by way of container orchestration to hybrid on-premises, virtual-private, and public cloud platforms.


Detail in the Big Picture


The resulting fine-grained transactional deployments have to be reconciled against the need for a realtime, 360-degree view of current and historic data for analytics and business guidance.


In like fashion, Blue River Systems Group advocates for high-throughput, low-latency, scale-out transactional microservices that project directly to low-barrier, big data graph platforms for effective, realtime analytics.  Both third-generation graph databases and enterprise-grade, inference-engine-oriented knowledge graphs are candidate targets for realtime transactional data projection, resulting in many microservices contributing to a singular view of the enterprise.


It is time to break down the human-, technology-, and resource-consuming silos created by cobbling together platforms on a case-by-case basis.  Keep the data—and information—flowing rather than allowing it to collect in a silo-induced lake.


BRSG consultants pursue ideas that scale and hold together well.  Please reach out if we can be of service: info@brsg.io, or call 303.309.6240.


What if we used “hydraulic data cycle” rather than “data lake” as a metaphor for big data?  As a thought experiment, let’s consider the continuous and complex process whereby water on the earth’s surface evaporates, rises into the atmosphere, condenses into rain or snow, and falls back to the surface to water the land, run in rivers, pause in lakes and aquifers, perhaps even find its way back to the oceans, but mostly repeating the cycle.


Super Cell Water Cycle


Enterprise data should be this active for its entire lifecycle, not stagnating in an artificial lake.


Consider that data has two primary jobs: 1) operate the business, and 2) guide the business.  These two functions are distinct but should not be separate.  Transactional systems should work hand-in-hand with analytic platforms to provide operational capability, realtime analysis, historical perspective, training sets for machine learning, and explainable artificial intelligence—augmenting human strengths with high tech capabilities.  This is what BRSG seeks to accomplish with Hybrid Transactional/Analytic Processing (HTAP).


Further, consider that business data structures can—and should—reflect real-world objects, thereby helping to reduce artificial barriers between The Business and Information Technology.  From an HTAP perspective, it would be useful if the transactional data structures that reflect the business translated one-for-one into the analytic structures that inform the business, producing analytics that rain insight from precipitation drawn across “all” operational environments.
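That one-for-one translation can be sketched directly: a business object's fields become graph edges, with no intermediate mapping layer.  A toy example in Python; the `Player` type and the triple layout are assumptions for illustration, not a fixed schema:

```python
from dataclasses import dataclass, fields

@dataclass
class Player:
    """A transactional record that mirrors a real-world business object."""
    id: str
    team: str
    score: int

def to_triples(obj):
    """Project a business object to (subject, predicate, object) triples."""
    subject = f"{type(obj).__name__.lower()}:{obj.id}"
    return [(subject, f.name, getattr(obj, f.name))
            for f in fields(obj) if f.name != "id"]

to_triples(Player(id="1", team="blue", score=250))
# -> [('player:1', 'team', 'blue'), ('player:1', 'score', 250)]
```

Because each field maps to exactly one edge, the shape the business writes is the shape the analysts query.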


This series of blog posts will consider a number of “silo-spanning” topics that HTAP raises.  From an enterprise data architecture perspective, we will explore the possibility of utilizing RDF data structures—semantic web and linked data—for high-throughput, low-latency, reliable, supervised transactional systems that naturally project into enterprise graph database and knowledge graph big data platforms.
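The write path just described can be sketched in a few lines: each committed operational write is projected immediately into an analytic index, so the analytic view is never more than one write behind.  A hedged Python sketch; the names `commit` and `outgoing` are illustrative, and a real system would project to a graph database rather than an in-memory dict:

```python
from collections import defaultdict

log = []                      # transactional, append-only record of writes
outgoing = defaultdict(list)  # analytic projection: subject -> outgoing edges

def commit(subject, predicate, obj):
    """Record an operational write and project it to the analytic view."""
    log.append((subject, predicate, obj))       # operational write
    outgoing[subject].append((predicate, obj))  # realtime projection

commit("player:1", "member_of", "team:blue")
commit("team:blue", "competes_in", "league:north")

outgoing["player:1"]  # -> [('member_of', 'team:blue')]
```

The point of the sketch is the coupling: analytics reads from a structure that is populated inside the transactional path, not from a nightly batch copy.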


The premise for our discussion is that graphs are powerful for both transactions and analytics.  Because they are self-describing, they lend themselves to low-code/no-code enterprise solutions with a short time to market.
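“Self-describing” means the schema can be read off the data itself rather than declared up front, which is the property that low-code graph tooling exploits.  A toy sketch over (subject, predicate, object) triples; the subject-prefix convention (`player:1`) is an assumption for this example:

```python
triples = [
    ("player:1", "team", "blue"),
    ("player:1", "score", 250),
    ("player:2", "team", "red"),
]

def discover_schema(triples):
    """Collect the set of predicates used with each kind of subject."""
    schema = {}
    for s, p, _ in triples:
        kind = s.split(":")[0]  # e.g. 'player:1' -> 'player'
        schema.setdefault(kind, set()).add(p)
    return schema

discover_schema(triples)  # -> {'player': {'team', 'score'}}
```

A tool that works this way needs no hand-written mapping when a new predicate appears: the next scan of the data picks it up.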


BRSG consultants develop ideas that scale, such as Hybrid Transactional/Analytic Processing strategies by way of semantic web, temporal linked transactional data, and graph analytic data.  Please reach out if we can be of service: info@brsg.io, or call 303.309.6240.