In our previous article, we discussed the importance of using Git version control when developing TM1 models, especially when working in a team of several developers. We mentioned that the tm1project.json file acts as a set of rules that map the server state (schema and data) to Git. In this post, we would like to share how our own tm1project.json methodology works.
To link our TM1 model to any Git tracking system we need a tm1project.json configuration file. The challenge in TM1 Git tracking is that we want to manage everything that makes up a database: the schema, the content of objects, in some cases object data, and the code parts defined within the schema (rules, ETL/TI processes). In addition, we want to define CI/CD steps that run before and after Git commands related to schema changes, and to track changes to our model's server configuration.
The tm1project.json is essentially a custom set of declarative rules that instruct our model through the Git endpoint of the TM1 REST API: which objects we want to track in Git and how, which process steps to run when pushing to or pulling from Git, and which system settings to handle.
In the tm1project configuration file, you can list in a simple JSON structure which TM1 objects you want to include in or exclude from Git tracking, which CI/CD tasks you want to perform, and which additional data assets you want to track. Within a tm1project.json you can also distinguish between the system environments of a given model (dev/test/prod), allowing a clean separation of the different CI/CD stages.
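To make the overall layout concrete before we walk through the individual sections, here is a minimal sketch of the top-level skeleton of such a file. Python is used here purely for illustration; the key names are exactly the ones that appear in the snippets below, but the empty values are placeholders to be filled in.

    import json

    # a minimal sketch of the top-level layout of a tm1project.json;
    # the actual section contents are discussed one by one below
    tm1project = {
        "Version": "1.0",
        "Files": [],        # extra non-TM1 files (CSV datasets, configs) to track
        "Tasks": {},        # named TI process calls reused by the hooks below
        "PrePull": [],      # tasks to run before a Git pull
        "PostPull": [],     # tasks to run after a Git pull
        "PrePush": [],      # tasks to run before a Git push
        "Ignore": [],       # objects to exclude from (or re-include in) tracking
        "Deployment": {},   # environment-specific (dev/test/prod) overrides
    }

    with open("tm1project.json", "w", encoding="utf-8") as f:
        json.dump(tm1project, f, indent=2)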
Different TM1 object types require different strategies when tracking in Git
TM1 Git does not delete objects; we have to take care of that in a TI process or some other way. That is, if an object is deleted in the source model it will be removed from the Git repository, but pulling that change will not delete the object from the target model unless we ensure it with some action of our own!
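In our own flow this is normally handled by the postpull_-prefixed TI processes described in the Tasks section below, but as an illustration of the "some other way", here is a hedged sketch that removes obsolete objects directly through the REST API. The server URL, credentials and object names are placeholders, and standard DELETE requests against the Cubes('…') and Dimensions('…') resources are assumed.

    import requests

    # placeholders: adjust the server URL, credentials and object names to your environment
    BASE = "https://tm1server:8010/api/v1"
    AUTH = ("admin", "password")          # basic TM1 authentication assumed

    # objects that were removed from the source model (and hence from the Git repository)
    # but still exist in the target model after a pull
    obsolete = [
        "Cubes('Old Reporting Cube')",
        "Dimensions('Old Reporting Dimension')",
    ]

    with requests.Session() as session:
        session.auth = AUTH
        session.verify = False            # dev servers often run with self-signed certificates
        for resource in obsolete:
            response = session.delete(f"{BASE}/{resource}")
            # 204 means deleted, 404 means it was already gone; anything else is worth a look
            print(resource, response.status_code)

Wrapping the same cleanup into a postpull_-prefixed process instead lets the Git deployment itself take care of it, as shown in the Tasks section below.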
{ "Version": "1.0", // Git tracked custom files, this folder found under the model data directories and we separate // different use case: "Files": [ // git controlled datasets, these are mostly small csv-s which have business configuration // data or parameters like volume curves, user settings, etc which cannot be sourced via a general // ETL pipeline and are maintained by usually business users but we want a trace of changes "GitControlledDataSet/*.*", // git controlled model parameters these are usually our zSYS Maintenance cubes data dumps // which consist of generic parameters which we want to transmit between models "GitControlledConfigs/*.*", // our custom “manually” maintained dimension CSV-s which maintained by business and we want // to track changes between environments "GitControlledDimensionCSV/*.*" ] }
// our default tasks, implemented by custom TI processes
"Tasks": {

  "Backup": {
    "Process": "Processes('zSYS Backup')",
    "Parameters": [ { "Name": "pWait", "Value": "1" } ]
  },

  // handy for dropping all rules before master data changes arrive: for example, when an element that
  // previously had a rule reference is removed, we can detach all rules during the Git migration, apply
  // the master data changes, and let the Git flow deploy the new rules afterwards
  "PrePullDropRules": {
    "Process": "Processes('zSYS Maintenance Clear All Cube Rule')"
  },

  // update all manually maintained, Git-tracked dimensions from the folder mentioned above
  "PostPullUpdateAllGitControlledDimFromCSV": {
    "Process": "Processes('zSYS Maintenance Dimension Import All from CSV')"
  },

  // if any maintenance change has to run after deployment (deleting an object that is excluded from a
  // model, changing a system parameter, running an ETL process, etc.), we use the naming convention
  // postpull_SOMETHING and this process executes those processes in alphabetical order
  "PostPullRunAllPostPullProcess": {
    "Process": "Processes('zSYS Maintenance Run All PostPull Prefix Named Process')"
  },

  // handy cleanup task to remove unused subsets and views
  "GarbageCleanUp": {
    "Process": "Processes('zSYS Maintenance View and Subset Cleanup')",
    "Parameters": [ { "Name": "pRun", "Value": "1" } ]
  }
}
// the generic task execution definitions, not specific to any system environment
"PrePull": [],
"PostPull": [
  "Tasks('PostPullUpdateAllGitControlledDimFromCSV')",
  "Tasks('GarbageCleanUp')"
],
"PrePush": [
  "Tasks('GarbageCleanUp')"
]
// generic TM1 object exclusion / inclusion list
"Ignore": [

  "Cubes/Views",

  // our special bedrock alternatives :) control objects (names starting with }) are ignored by default,
  // but these we would like to track
  "!Processes('}bedrock.cube.data.export.ks')",
  "!Processes('}bedrock.cube.data.import.ks')",
  "!Processes('}bedrock.hier.export.ks')",
  "!Processes('}bedrock.hier.import.ks')",

  // to track the rules on element attribute cubes we need to include those cubes explicitly
  "!Cubes('}ElementAttributes_Employee')",
  "!Cubes('}ElementAttributes_Organization Units')",
  "!Cubes('}ElementAttributes_Profitability Segments')",
  "!Cubes('}ElementAttributes_Versions')",
  "!Cubes('}ElementAttributes_Simulation Case')",

  // picklists to track
  "!Cubes('}PickList_Employee Settings')",
  "!Cubes('}PickList_Headcount')",

  // example: ignore all } subsets in every hierarchy
  "Dimensions/Hierarchies/Subsets('}*')",

  // DWH master data dimensions handled by the main ETL pipeline, and environment-specific dimensions
  // which cannot be deployed
  "Dimensions('Business Partners')",
  "Dimensions('Business Partner Dummies')",
  "Dimensions('Cost Objects')",
  "Dimensions('Curve Types')",
  "Dimensions('Employee')",
  "Dimensions('Key Account Managers')",
  "Dimensions('Organization Units')",
  "Dimensions('Projects')",
  "Dimensions('Profitability Segments')",
  "Dimensions('Scenarios')",
  "Dimensions('Simulation Case')",
  "Dimensions('Versions')",
  "Dimensions('zSYS Analogic UserPool')",
  "Dimensions('zSYS Analogic System Messages')"
],

// environment-specific overrides
"Deployment": {
  "dev":     { "PrePull": [ "Tasks('Backup')" ] },
  "preprod": { "PrePull": [ "Tasks('Backup')" ] },
  "prod":    { "PrePull": [ "Tasks('Backup')" ] }
}
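Before the Git push/pull flow can use these rules, the assembled file has to be registered with the database. The sketch below shows one way to do that over the REST API; the !tm1project resource, server URL and credentials are assumptions based on our own setup, so check the REST API documentation of your Planning Analytics version.

    import json
    import requests

    # placeholders: adjust the server URL and credentials to your environment
    BASE = "https://tm1server:8010/api/v1"
    AUTH = ("admin", "password")

    # load the assembled configuration and register it with the database;
    # note that a file containing // comments like the example above is not
    # valid JSON and would need the comments stripped first
    with open("tm1project.json", encoding="utf-8") as f:
        project = json.load(f)

    response = requests.put(f"{BASE}/!tm1project", json=project, auth=AUTH, verify=False)
    print(response.status_code)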
Once we have created a tm1project.json file that fits our model, we are ready to automate our TM1 CI/CD process, which is the topic of our next article.