Decisions, in all contexts of life, are made with the best information we can collect and under many assumptions. When the surrounding context changes, the assumptions change with it, and then it is time to question and rethink our overall strategy.
Software architecture and design decisions are made based on principles such as the following:
- Ease of use
- Separation of Concerns
- Low coupling
- High cohesion
These principles lead to methods and low-level techniques in the process of software creation. Some of them rest on assumptions about the cost of producing, debugging and maintaining code by capable humans, also known as developers.
Every language, tool and framework comes with its own boilerplate code, ceremony and plumbing. By plumbing, I mean the code you need to add to glue the components to the framework and/or infrastructure. Plumbing code tends to be a real pain for developers: it provides no added value to the business, it is tedious to write, it is boring, and it is a source of bugs. Most programmers hate repeating such plumbing code, which pushes them into the background as mere typists.
My hypothesis is this: many of the development choices we make rely on the cost of changing code. And many developers make such decisions instantly, like following a mantra of their "agile" religion, without stopping to think twice.
But what happens if that cost is reduced by a factor of 10? Could this lead us to rethink our design choices and approach the problem with new eyes?
Now, I need you to consider the possibility that we are doing MDE (Model Driven Engineering) and we have a code generator: a good one, capable of improving your productivity as a developer by a factor of 10 in a given domain. Of course, if such a code generator ever existed, it would be a disruptive technology, wouldn't it? Maybe you don't believe in code generation, or have had bad experiences with it in the past, but please, open your mind and just consider the possibility before reading on.
With this kind of tool under your belt and in your mind, let's now review how things would change:
Code generators are fed by metadata, or models, or specifications. Choose your preferred name; the semantics remain the same. In the end, a model or a spec is a piece of information at a certain level of abstraction, useful for reasoning about a system.
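To make the idea concrete, here is a minimal sketch in Python (not taken from any real MDE tool; the entity and field names are hypothetical): the model is plain data at a higher level of abstraction, and the generator mechanically turns it into code.

```python
# The "model": a declarative description of an entity, with no implementation
# details. In a real MDE tool this would live in a modeling environment.
MODEL = {
    "entity": "Customer",
    "fields": [("name", "str"), ("email", "str"), ("age", "int")],
}

def generate_entity(model):
    """Emit the source of a Python class from the model specification."""
    lines = [f"class {model['entity']}:"]
    args = ", ".join(f"{n}: {t}" for n, t in model["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for name, _ in model["fields"]:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines)

print(generate_entity(MODEL))
```

Notice that all the reasoning happens at the model level; the emitted class is a disposable artifact that can be regenerated at any time.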
In this context, if the generator is going to improve your productivity by a factor of 10, it makes sense to dedicate more time to carefully building the model (analysis and design) and less time to actual coding.
The model will reflect more and more the essence of the business. The code can be discarded and regenerated whenever needed, making it easy to move to a different technology.
Therefore, the value lies more and more in the model, and the code becomes more and more discardable as you increase the percentage of generated code, in the same way that people started coding in C instead of maintaining assembler code once they trusted the compiler.
In this way of working, Forward Engineering would be mainstream and the way to go: building and maintaining models, generating 80% of the code, and then adding the missing 20%.
It makes no sense, in terms of cost and feasibility, to look for nonexistent reverse engineering tools to keep models in sync with code. The code is guaranteed to be in sync with the models if it is only produced by the generator and never touched by a human developer.
The goal is not to replace or avoid developers, but to relieve them of the boring work (the 80%) and keep their brains focused on the real quest: the non-trivial 20% that remains.
Don't touch generated code. The delimitation between user code and generated code must be strict, and those who dare to break this rule should face the punishment of rewriting and refactoring their code.
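One common way to enforce this delimitation, sketched here in Python under my own illustrative naming (the file and class names are hypothetical), is to let the generator own a base class in its own file while developers extend it in a separate, hand-written file. Regeneration then overwrites only the generated file and can never destroy handwritten logic.

```python
# --- customer_base.py (GENERATED - DO NOT EDIT) ---
class CustomerBase:
    """Plumbing owned by the generator; overwritten on every regeneration."""
    def __init__(self, name, email):
        self.name = name
        self.email = email

    def to_dict(self):
        # Boring, repetitive serialization code: exactly the 80% we
        # want the generator, not a human, to maintain.
        return {"name": self.name, "email": self.email}

# --- customer.py (hand-written: the non-trivial 20%) ---
class Customer(CustomerBase):
    """Business logic lives only here, safely outside the generator's reach."""
    def is_contactable(self):
        return "@" in self.email

c = Customer("Ada", "ada@example.com")
print(c.to_dict(), c.is_contactable())
```

The same idea appears in real tools as partial classes, protected regions, or generation-gap patterns; the essential point is that the boundary is mechanical, not a matter of discipline alone.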
Design choices frequently involve making decisions on:
- Repositories: concrete, generic, no repositories at all?
- Data Transfer Objects (DTOs): one for all, many (one per use case), or none?
- POJOs/POCOs vs full entity-logic objects
- Anemic models?
- DAO / ORM: the mapping headache
- Fluent APIs or XML configuration files
- Inversion of Control frameworks
And once again, these choices are resolved in the traditional way, taking maintainability and ease of change into serious consideration, because the software will most likely change along with the business.
But now, with our new eyes, let's drop the assumption that this code is costly to change. In fact, the cost of changing one of these decisions is near zero, or at least far smaller than changing the code in the traditional way: just flip a switch in the code generator strategy and regenerate everything again.
Let me give you an example: ORM mapping files (à la Hibernate/NHibernate) tend to be a real nightmare, especially XML files, when facing a medium or big system with 500 tables. Writing XML mapping files is tedious, error prone and real developer torture. In that context, it makes total sense to use a fluent API, convention over configuration, and any technique that helps alleviate this painful and repetitive task.
However, if you use a code generator (and I do, no science fiction here) that lets you select the target ORM and writes the correct mapping files in 100% of the cases, then in this new scenario XML is no longer a pain. I can occasionally open a mapping XML file, or its fluent API version, and check that it is correct, as long as I do not have to write it by hand.
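A toy Python sketch of "flip a switch and regenerate": the same table specification can be rendered as an XML mapping or as fluent-style mapping code. The output formats below are simplified stand-ins, not real Hibernate/NHibernate syntax, and the table spec is invented for illustration.

```python
TABLE = {
    "entity": "Customer",
    "table": "CUSTOMERS",
    "columns": [("name", "NAME"), ("email", "EMAIL")],
}

def generate_mapping(spec, style="xml"):
    """Render one table spec in the chosen mapping strategy."""
    if style == "xml":
        props = "\n".join(
            f'  <property name="{p}" column="{c}"/>' for p, c in spec["columns"])
        return (f'<class name="{spec["entity"]}" table="{spec["table"]}">\n'
                f"{props}\n</class>")
    # "fluent" strategy: emit mapping *code* instead of configuration
    maps = "\n".join(
        f'    Map(x => x.{p}).Column("{c}");' for p, c in spec["columns"])
    return (f'public {spec["entity"]}Map() {{\n'
            f'    Table("{spec["table"]}");\n{maps}\n}}')

print(generate_mapping(TABLE, "xml"))
print(generate_mapping(TABLE, "fluent"))
```

Switching from 500 XML files to 500 fluent mapping classes is one argument change plus a regeneration, which is exactly why the XML-versus-fluent debate loses most of its weight.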
And that is basically what I want to stress: design choices in the software industry are strongly influenced by the tools and methods we use to create software assets. If we change the tools and the cost of change, we should start rethinking whether there is a better way of applying other principles: for example, preferring ease of change in the model over ease of change in the code.
Enabling Evidence-Based Software Engineering:
Once you take the red pill, there is no way back. When facing each new problem, instead of imagining what the best-performing architecture for your system would be, you can build a proof of concept in two or three candidate architectures, create a benchmark, and test and measure it under a realistic stress load. Then, and only after comparing the measurements, you decide on the best architecture for the system.
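A minimal sketch of measuring instead of guessing, in Python: benchmark two candidate implementations of the same operation under an identical workload and let the numbers decide. The candidates here are deliberately trivial placeholders standing in for two generated architecture variants.

```python
import timeit

def candidate_a(data):
    # Stand-in for "architecture A": a plain list comprehension.
    return [x for x in data if x % 2 == 0]

def candidate_b(data):
    # Stand-in for "architecture B": filter() with a lambda.
    return list(filter(lambda x: x % 2 == 0, data))

# A stand-in for a realistic stress load; a real PoC benchmark would replay
# production-like traffic against each generated prototype.
workload = list(range(10_000))

# Sanity check: both candidates must be functionally equivalent
# before their performance numbers are comparable.
assert candidate_a(workload) == candidate_b(workload)

for name, fn in [("A", candidate_a), ("B", candidate_b)]:
    elapsed = timeit.timeit(lambda: fn(workload), number=200)
    print(f"architecture {name}: {elapsed:.3f}s")
```

The shape of the experiment, equivalent candidates, an identical workload, and a recorded measurement, is what turns the architecture decision into an evidence-based one.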
This way of doing things changes the quality and the way we create software, turning an artisanal process into a more scientific one, driven by data-based decisions.
Some of you are ready to accept and embrace it; others are not. For sure, my point of view on the issue could be strongly biased, but after working in conceptual modeling and code generation for more than 14 years, my feeling is that modeling and code generation technology has matured a lot in recent years, and the disruption point is approaching faster and faster. Will you jump on this fast train?
What do you think?
PS: For those who have read this far but still do not believe in code generation yet:
Would you like to try a real, modern code generator and see its possibilities?
Take a look at Radarc and share your impressions.
Radarc is made by developers, for developers.