Posts categorized “Architecture”.

Microservices, MEAN Stack, & Docker

In May I spoke about Microservices and the MEAN Stack in Madrid (JSDay.ES), Málaga (OpenSouthCode) and Seville (SevillaJS).
Here is the recording of the first one (in Spanish).

Later on, with LemonCode, we broadcast a webinar about Docker and Docker Compose (in Spanish).

Finally, with Bruno Capuano, I recorded a podcast introducing Microservices (in Spanish).

Dissecting an AppNow Specimen

Artisan Spanish Knife
AppNow is a minimalist service that turns simple models into a cloud-deployed back-end.
The simplicity of this approach encourages developers to focus on what to build (business needs) instead of how to build it (technical skills) and where to deploy it (DevOps).

In this article, we are going to delve into the technical choices made for the back-end: architecture, languages and tools, organization, and coding practices.

In particular, this can be a useful read for a developer wanting to get started with MEAN development. On the other hand, the text is full of low-level technical details and is not suitable for those unfamiliar with programming.
You have been warned.


Publishing OpenData with AppNow


OpenData logo by @fontanon

Open Data is a philosophy of making data public and accessible on the internet: governmental data, expenditures and investments, economic indicators, resource exploitation, anonymized medical information, weather information, genomics, or universe exploration, to cite a few. For example, Data.gov and Data.gov.uk are good sources of governmental data from the US and UK governments respectively.

This movement is a prerequisite for others like:

  • Open Government, to add transparency to the work of our politicians, or
  • Open Science, where scientific facts and research papers are freely shared on the net for anyone to conduct further research, today or in the future, on top of current knowledge. That is what Newton referred to with “If I have seen further it is by standing on the shoulders of giants”, and it is one of the best ways to boost human knowledge and scientific progress.

In the times we live in, one can argue we are surrounded by tons of data; our capacity to process it is quickly saturated, and filtering mechanisms are more needed than ever to reduce the noise over the signal.

On the other hand, not all the relevant data is published: not all of it is in digital form, and not all of it is available to others. Providing fast and cheap publication mechanisms can help to spread valuable Open Data.

What can we do as developers?


Microservices Standardization from Models

Alhambra Tiles by Roberto Verturini

Microservices are emerging as an architectural pattern. They encourage small, autonomous, and decoupled services that expose a stable contract and implement a business capability with a minimum set of external dependencies.

For further discussion of Microservices, accurate definitions, and their use cases, read Lewis & Fowler, Richardson, or Crammon.

Without a doubt, Microservices are gaining momentum. A lot, and too fast, as usual in the tech industry. So fast that microservices abuse (adoption just for the buzz and coolness factor) is already causing problems:

Rethinking development design choices with MDE

The thinker, Paris. Photo CC by Dano

Decisions in all contexts of life are taken with the best information we can collect and under many assumptions. Changes to the surrounding context can invalidate those assumptions, and then it is time to question and rethink our overall strategy.

Software architecture and design decisions are taken based on principles such as the following:

  • Simplicity
  • Ease of use
  • Maintainability
  • Separation of Concerns
  • Understandability
  • Low coupling
  • High cohesion
  • Robustness
  • Performance
  • Speed

These principles lead to methods and low-level techniques in the process of software creation. Some of them are based on assumptions about the cost of producing, debugging, and maintaining code by capable humans, also called developers.

Every language, tool, and framework comes with its own boilerplate code, ceremony, and plumbing. By plumbing, I mean the code you need to add to glue the components to the framework and/or infrastructure. Plumbing code tends to be a real pain for developers: it provides no added value to the business, it is tedious to write, it is boring, and it is a source of bugs. Most programmers hate repeating such plumbing code, as it pushes them into the background as mere typists.

My hypothesis here is that many of the development choices we make rely on the cost of changing code. And many developers take such decisions instantly, like following a mantra of their “agile” religion, without stopping to think twice in many cases.

But what happens if that cost is reduced by a factor of 10? Could this lead us to rethink our design choices and approach the problem with new eyes?

Now, I need you to consider the possibility that we are doing MDE (Model Driven Engineering) and we have a code generator: a good one, capable of improving your productivity as a developer by a factor of up to 10 in a given domain. Of course, if such a code generator ever existed, it would be a disruptive technology, wouldn’t it? Maybe you don’t believe in code generation, or have had bad experiences with it in the past, but please, open your mind and just consider the possibility before reading on:

With this kind of tool under your belt and in your mind, let’s now review how things would change:

Code generators are fed by metadata, models, or specifications. Choose your preferred name; the semantics remain the same. In the end, a model or a spec is a piece of information at a certain level of abstraction, useful to reason about a system.
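As a minimal sketch (using a hypothetical model format and generator, not any particular tool’s syntax), a model and a generator can be as simple as a data structure plus a model-to-text function:

    // The model: information at a higher level of abstraction than the final code.
    interface FieldSpec { name: string; type: "string" | "number" | "boolean"; }
    interface EntitySpec { name: string; fields: FieldSpec[]; }

    const invoiceModel: EntitySpec = {
      name: "Invoice",
      fields: [
        { name: "number", type: "string" },
        { name: "total", type: "number" },
        { name: "paid", type: "boolean" },
      ],
    };

    // The generator: a trivial model-to-text transformation.
    function generateEntityClass(spec: EntitySpec): string {
      const props = spec.fields.map(f => `  ${f.name}: ${f.type};`).join("\n");
      return `export class ${spec.name} {\n${props}\n}\n`;
    }

    console.log(generateEntityClass(invoiceModel));

Real generators are obviously richer than this, but the shape is the same: the model carries the intent, and the transformation carries the technical know-how.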

Conclusion 1:
In this context, if the generator is going to improve your productivity by a factor of 10, it makes sense to dedicate more time to carefully building the model (analysis and design) and less time to actual coding.

The model will increasingly reflect the essence of the business. The code can be discarded and regenerated whenever needed, making it easy to move to a different technology.

Conclusion 2:
Therefore, the value moves more and more into the model, and the code becomes increasingly discardable as you raise the percentage of generated code. In the same way, people started to code in C instead of maintaining assembler code once they trusted the compiler.

Conclusion 3:
In this context, Forward Engineering would be mainstream and the way to go: building and maintaining models, generating 80% of the code, and then adding the missing 20%.
It makes no sense, in terms of cost and feasibility, to look for non-existent reverse engineering tools to keep models in sync with code. Code is guaranteed to be in sync with models if it is only produced by the generator and never touched by a human developer.

The goal is not to replace or avoid developers, but to relieve them from doing the boring things (the 80%) and focus their brains on the real quest: the non-trivial missing 20%.

Conclusion 4:
Don’t touch generated code. The delimitation between user code and generated code should be strict, and those who dare to break this rule must be punished by having to rewrite and refactor their code.
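One way to keep that delimitation strict is a simple sketch like the following (hypothetical file names and a base-class/extension convention, not any specific tool’s mechanism): the generated file is fully overwritten on every run, and user code lives in a separate file the generator never touches.

    // invoice.generated.ts — rewritten by the generator on every run.
    // AUTO-GENERATED FILE: DO NOT EDIT. Any change here is lost on regeneration.
    export class InvoiceBase {
      number = "";
      total = 0;
      paid = false;
    }

    // invoice.ts — hand-written user code; the generator never touches this file.
    import { InvoiceBase } from "./invoice.generated";

    export class Invoice extends InvoiceBase {
      // The non-trivial 20%: custom business logic lives only here.
      isOverdue(today: Date, dueDate: Date): boolean {
        return !this.paid && today > dueDate;
      }
    }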

Design choices:

Design choices frequently involve taking decisions on:

  • Repositories: concrete, generic, no repositories at all?
  • Data Transfer Objects (DTO): one for all, many (one per use case), none
  • POJOs/POCOs vs full entity-logic objects
  • Anemic models?
  • DAO / ORM – mapping headache
  • Fluent APIs or XML configuration files
  • Inversion of Control frameworks

And once again, these choices are traditionally resolved by giving serious consideration to maintainability and ease of change, because the software will most likely change with the business.

But now, with our new eyes, let’s drop the old assumption: this code is no longer a problem to change. The cost of changing one of these decisions is close to 0, or at least smaller than changing code in the traditional way: just flip a switch in the code generator strategy and regenerate everything again.
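As a rough illustration, such a switch could look like the following hypothetical generator configuration (the option names are invented for the example); changing a value and regenerating replaces what would otherwise be a costly manual refactoring.

    // Hypothetical generator configuration: each design choice becomes a switch.
    interface GeneratorOptions {
      repositories: "concrete" | "generic" | "none";
      dtos: "single" | "perUseCase" | "none";
      orm: "xmlMappings" | "fluentApi";
      iocContainer: boolean;
    }

    const options: GeneratorOptions = {
      repositories: "generic",
      dtos: "perUseCase",
      orm: "fluentApi",      // switching to "xmlMappings" is a one-line change
      iocContainer: true,
    };

    // generate(model, options);  // regenerate the whole code base with the new strategy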

Let me give you an example: ORM mapping files (à la Hibernate/NHibernate) tend to be a real nightmare, especially as XML files, when facing a medium or big system with 500 tables. Writing XML files is tedious, error-prone, and a real torture for developers. In this context it makes total sense to use a fluent API, convention over configuration, and any other technique that helps to alleviate this painful and repetitive task.

However, if you use a code generator (and I do, no science fiction here) that lets you select the target ORM and writes the correct mapping files in 100% of cases, then, in this new scenario, XML is no longer a pain. I can occasionally open a mapping XML file or a fluent API version and check that it is correct, as long as I do not have to write it by hand.

And that is basically what I want to stress: design choices in the software industry are strongly influenced by the way, and the tools, we use to create software assets. If we change the tools and the cost of change, we should start rethinking whether there is a better way of resolving the other principles. For example, preferring ease of change in the model instead of ease of change in the code.

Enabling Evidence-based Software Engineering:

Once you take the red pill, there is no way back. When facing each new problem, instead of imagining which architecture would perform best for your system, you can build a Proof of Concept in two or three candidate architectures, create a benchmark, and test and measure it with realistic stress load tests. And then, only after comparing the measurements, take a decision about the best architecture for this system.
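A minimal sketch of such a measurement could look like this (Node.js 18+ with the built-in fetch; the URLs are hypothetical PoC deployments generated from the same model in two candidate architectures). A real stress test would add concurrency, warm-up, and realistic load profiles.

    const candidates = [
      { name: "arch A (MVC + EF)", url: "http://localhost:3001/api/invoices" },
      { name: "arch B (DDD n-layered)", url: "http://localhost:3002/api/invoices" },
    ];

    async function measure(url: string, requests = 500): Promise<number> {
      const start = Date.now();
      for (let i = 0; i < requests; i++) {
        await fetch(url);                       // sequential for simplicity
      }
      return (Date.now() - start) / requests;   // average latency in milliseconds
    }

    async function main() {
      for (const c of candidates) {
        console.log(`${c.name}: ${(await measure(c.url)).toFixed(1)} ms/request`);
      }
    }
    main();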

This way of working changes the quality and the way we create software, turning an artisan process into a more scientific one, driven by data-based decisions.

Some of you are ready to accept and embrace it; others are not. For sure, my point of view on this issue could be strongly biased, but after working in conceptual modeling and code generation for more than 14 years, my feeling is that the technology for modeling and code generation has matured a lot in recent years, and the disruption point is approaching faster and faster. Will you jump onto this fast train?

What do you think?

 

PS: For those who read this far but still do not believe in code generation yet:

Advertising ON:  

Would you like to try a real and modern code generator to see its possibilities?

Take a look at Radarc and share your impressions.

Radarc is made by developers, for developers.

Advertising OFF:

 

Radarc 3.0 Released!

My arrival at my new job in Sevilla has coincided with the preparation and launch of a new product: we at Icinetic are releasing Radarc 3.0. Radarc is a very easy-to-use code generator, highly integrated with Visual Studio and targeting .NET technologies.

Radarc has the ability to produce multiple architectures from the same base models, keeping generated artifacts in sync when a model element changes. Architectures and the DSLs for defining the models are packaged in so-called “Formulas”.

Currently, the following Formulas are available for download, free for non-commercial usage:

  • ASP.NET Web Forms + Entity Framework
  • ASP.NET MVC 3.0 + Entity Framework
  • ASP.NET MVC 3.0 + Entity Framework + Azure Storage & deployment
  • Windows Phone 7

Radarc creates a complete prototype application in seconds, following the cycle: change the model, touch no line of code, build, and run. Prototyping an application and obtaining a first scaffolding of it is a matter of minutes. Moreover, custom code can be inserted in specially designated locations that are preserved in every regeneration pass.
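To illustrate the idea of preserved regions (sketched here in TypeScript with invented markers, not Radarc’s actual mechanism), a regeneration step can carry the user’s custom block from the previous version of a file into the freshly generated one:

    const BEGIN = "// BEGIN CUSTOM CODE";
    const END = "// END CUSTOM CODE";

    // Keep the block between the markers from the previous file version;
    // everything else in the file is overwritten by the generator.
    function preserveCustomRegion(previous: string, regenerated: string): string {
      const kept = previous.match(new RegExp(`${BEGIN}([\\s\\S]*?)${END}`));
      if (!kept) return regenerated;                 // nothing to preserve
      return regenerated.replace(
        new RegExp(`${BEGIN}[\\s\\S]*?${END}`),
        `${BEGIN}${kept[1]}${END}`,
      );
    }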

Radarc 3.0 is available with three licensing models and is free for non-commercial usage.

Other technologies are available on demand, such as:

  • .NET 4.0 Domain Driven Design N-Layered Architecture
  • NHibernate & more to come…

Some usage scenarios:
  • If you work in a .NET development shop, feel free to give it a try and give us some feedback.
  • On the other hand, if you want to start learning one of the previous technologies or architectures, you can also use Radarc to generate a reference sample application and start exploring the code.
  • If you are an experienced software architect and need to evaluate software architectures by benchmarking them before choosing a winner for your project, think about the cheap possibility of generating the same application in two technologies and testing how well each performs for your specific problem.

These days, I am learning a lot about the state of the art here at Icinetic, and I hope to start contributing to the bits very, very soon.
Bonus: a 20-minute demo video (in Spanish) generating three architectures is available.
Next week we will be attending Code Generation 2012. If you are interested, join us and see a live demo or download it and give it a try!

Hello World with Essential, the video

Essential Logo

The Hello World sample is a nice starting point to show the syntax and capabilities of any new language. The same test is also useful for code generators and Domain Specific Languages (DSLs) as a proof of concept.

Following this honorable tradition, I have created a video showing the capabilities of Essential: the tool I am working on for doing agile Model Driven Development.

In this 10-minute video you will get a general idea of the DSL the tool provides to create:

  • metamodels
  • models
  • templates
  • and control transformations
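To make that layering concrete, here is a generic sketch in TypeScript; it is not Essential’s actual syntax, just an illustration of how metamodel, model, template, and control transformation relate to each other.

    // Metamodel: the concepts the DSL can express.
    interface Greeting { language: string; text: string; }

    // Model: an instance conforming to the metamodel.
    const helloWorld: Greeting = { language: "en", text: "Hello, World!" };

    // Template: a model-to-text transformation.
    const greetingTemplate = (g: Greeting) =>
      `console.log(${JSON.stringify(g.text)}); // ${g.language}`;

    // Control transformation: decides which templates run over which model elements.
    function generate(models: Greeting[]): string {
      return models.map(greetingTemplate).join("\n");
    }

    console.log(generate([helloWorld]));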

To see the details, jump to Vimeo, activate High Definition (HD) mode, and go full screen (sorry, the embedded version is not good enough).

Essential IDE – Hello World sample from Pedro J. Molina on Vimeo.

More info about it, and 12 usage scenarios, can be found in the Code Generation 2010 presentation about Tailored Code Generators.

Share your impressions!

Presenting at Semana Informatica 2010

Link to semanainformatica.com

On April 27, my colleague Nicolas Cornaglia and I will be presenting a talk with some live demos in Valencia, Spain, representing our company Capgemini, as part of the Semana Informatica 2010 event.

The title of the talk will be: Productivity through frameworks and MDD.

The session will be delivered in Spanish. Full agenda (PDF version) and session details.

Abstract:

Business applications in Enterprise Software usually follow a fixed set of standards (global or in-house) to help keep the maintenance cost as low as possible (reducing TCO). In this context, homogeneity and regulatory compliance are frequently a must.

The main point of our presentation will be to show how an approach based on a good framework, modeling tools, and code generation techniques can be the right set of tools to achieve a high degree of standardization, quality, productivity, and flexibility to evolve the Enterprise Architecture. Such flexibility is key to providing a better Time to Market when a business process changes or a technical requirement suddenly emerges.

Balancing Variability & Commonality

When creating a DSL (Domain Specific Language), one of the most important choices is deciding which items in your domain are going to be considered variable and changeable, and which ones are going to be considered fixed, carved in stone.

The former need to be specified in your DSL, in your design, or maybe in code. The latter are considered immutable and will remain static for all your derived applications for ages.

Considering everything static is obviously useless. On the contrary, considering every aspect variable leads to another dead end, again producing nothing tangible as a result. Therefore, we will have to search for the virtue somewhere in the middle.

The main task here is to study a domain and weigh the variable parts against the fixed parts. It is not a trivial thing to do from the very beginning. Experience in DSL construction and, especially, experience in the domain help to train your nose, but there are no clear rules for it, nevertheless.

It is not only about knowing your requirements. It is about trying to predict how your requirements will change over time and which types of requirements are more likely to change.

Adding variability

A variable part could be, for example, the background color of your application. If so, you need to add syntax and semantics to your DSL to capture such a property. Let’s say you can express somewhere in your specification: { background-color = peach; }

We can select the peach color for app1, and maybe ivory for app2.
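As a minimal sketch, assuming a hypothetical JSON-like representation of the spec, the new variability point would surface in the generator or interpreter like this:

    interface AppSpec {
      name: string;
      backgroundColor?: string;              // the new variability point in the DSL
    }

    const app1: AppSpec = { name: "app1", backgroundColor: "peach" };
    const app2: AppSpec = { name: "app2", backgroundColor: "ivory" };
    const app3: AppSpec = { name: "app3" }; // property omitted by the modeler

    // The generator (or interpreter) must now resolve the property,
    // ideally falling back to a sensible default when it is missing.
    function resolveBackground(spec: AppSpec): string {
      return spec.backgroundColor ?? "white";
    }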

However, nothing is free, and this freedom comes with the following possible drawbacks:

  • You need to increase the size of your language (DSL), editors, model checkers, compilers, and code generators or interpreters.
  • Users have to provide a value for the property, unless you have also provided a sensible default value for the case of missing information.
  • Homogeneity across applications vanishes with respect to background-color. Now it is the user’s choice (whoever is in control of the modeling tool).
  • Specs are more complex.

Adding commonality

On the other hand, if you consider that the background of your application should always be the same because you are following, for example, a user interface style guide, then the background color is a fixed matter. Its value is provided by design, by a style guide, by an architect, or by a design choice, and the modeling user has no control over it.

In this scenario, the DSL is smaller. There is no need to specify the background color: it is implicit, not included in the model/specification.

With this kind of choice, we are betting on standardization. A shared library, a runtime framework, or an interpreter will take care of supplying the right color at the right moment.
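By contrast with the previous sketch, here the color disappears from the DSL and the model altogether; a shared module (the name and shape are hypothetical, for illustration only) owned by the architecture team supplies the value to every generated application.

    // shared/theme.ts — hypothetical module owned by the architecture team.
    // The color never appears in the DSL or in any model; every generated
    // application imports the same value, so homogeneity is preserved.
    export const THEME = {
      backgroundColor: "peach",   // fixed by the style guide, not by the modeler
    } as const;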

  • Users cannot change the background color; specs are smaller.
  • Standardization is improved across applications.
  • Users have no control over the feature.

But, what is the right choice?

It depends. There is no right choice with the information given so far. To answer the question, we need to consider whether the background color is a fundamental feature in our domain that needs to differ from application to application, or whether, on the contrary, the color should be used in a homogeneous way, following a predefined style guide.

Again, the domain imposes the rules to follow. Studying the domain and its variability is crucial to creating a consistent DSL focused on gathering the key features of the domain in a model: the important and variable ones. The important and fixed ones must also be identified, but they should not be included in the model; they belong in the framework or the runtime.

Standards, policy assurance, compliance

Everything related to standard procedures, compliance, and in-house style guidelines is a first-class candidate for standardization. Done that way, your developers will not have to remember all those weird standards and compliance rules when developing a specific artifact.

A code generator will provide the right values for them. It will do it silently, without errors or oversights. All the boring plumbing code of applications, such as naming guidelines, service publication, serialization, persistence, adapters, proxies, skeletons, stubs, and DAO code, is driven by strict standards and best practices and is a natural candidate for heavy automation by code generators.

Moreover, if the regulation or the standard changes, the change will impact the following assets:

  • a single change to the framework will be enough,
  • or a change to the code generator, followed by a regeneration process and a redeployment.

In both cases, it is cheaper than manually reviewing a set of in-production applications.

For example, think about migrating your data-access layer from a DAO pattern with raw SQL to an ORM-based approach like Hibernate.

Business Know-How

The core of the business know-how is the important and variable parts we are interested in collecting in a specification. Such features need to be modeled and, if possible, abstracted from the technology that will implement them.

If we do it this way, the model can outlive the current technology.

Why would we be interested in doing it that way?

Simply because technology evolves like fashion. Today everyone likes red T-shirts; tomorrow blue jeans will be sublime! Basic, Cobol, C, Java, C#, Ruby… what will be the next language to use in 5 years’ time?

Make your best bet on whatever platform best fulfills your requirements, but it would be nice to see the business process outlive the technology. 😉 We don’t know in which direction, but technology will evolve and change for sure.

Maintaining a language or a DSL

When a DSL or a language needs a review, you will probably be considering adding new features to the language.

Each new feature will increase the variability and the complexity of the language. Before deciding to add a new modeling feature, make a cost/benefit analysis and double-check that the value added by the improvement is greater than the cost of implementing it.

I like to follow the golden rule proposed by Gordon S. Novak about automatic programming:

“Automatic Programming is defined as the synthesis of a program from a specification. If automatic programming is to be useful, the specification must be smaller and easier to write than the program would be if written in a conventional programming language.”

Conclusion

Whenever possible:

  • Business Know-How should be captured by models, specs, DSLs.
  • Technical Know-How should be captured by code generators, model interpreters, best practices and patterns.

So, at the end of the day, I like the following pair of quotes to sum up what to include in a model:

  • The Spanish writer Baltasar Gracián in the XVII century said “Lo bueno si breve, dos veces bueno.” (a literal translation from Spanish could be: “Good things if brief, twice good.”)
  • On the other side, Albert Einstein (XX century) counterpoints “Things should be as simple as possible, but not simpler.”

Countdown for CG2010

The Programme for Code Generation 2010 has been published.

This year Mark has invited me to give an introductory session on Model Driven Software Development (MDSD) oriented to beginners.

Also, in a second session, I will discuss creating tailored code generators.

See you in Cambridge in June!