
Posts by pjmolina.

Language Workbench Competition 2011

Language Workbenches, as originally defined by Martin Fowler, are tools aimed at DSL creation and code generation, raising the level of abstraction of software development.

Currently, the main efforts in MDD, MDE, MDSD (model-driven-whatever you prefer…) are focused on the development of this kind of tool, perceived as a hot research area for Software Engineering.

In this scenario, Code Generation 2010 in Cambridge was the perfect place to spark the idea of promoting a contest to show and compare the advances of different language workbenches.

The Language Workbench Competition was born with the objective of serving as a point of comparison between different tools in this exciting and fast-moving area.

The competition is now open to the public, so anyone interested can enroll and implement the challenge just published.

On the other hand, if you want to know more about language workbenches, modeling and code generation, add this page to your bookmarks and come back in a few months to see some proposals.

The promoters of the idea are: Markus Völter, Eelco Visser, Steven Kelly, Angelo Hulshout, Jos Warmer, Bernhard Merkle, Karsten Thoms and myself.

So this is a call to arms, but with sportsmanship!

Angelo and Markus have already started the call.

Tailored Code Generators at CG2010

I presented the talk DSL and tool support for Tailored Code Generators at Code Generation 2010 in Cambridge, UK, on June 18th.

It was also the public presentation of Essential: the tooling supporting my approach to applying MDD. I got very good feedback from the audience and received many requests to test the tool.

People interested in beta testing it can still enroll here.

Introducing MDSD

The slides from yesterday's talk at Code Generation 2010, Introducing Model Driven Software Development:

Essential drop

Essential is going to be presented this week in Code Generation 2010 during the session DSL and tool support for building tailored code generators.

To celebrate this milestone and give more people the chance to try it, an early version is going to be released to those interested.

If this is your case, please enroll yourself using the evaluation request form.

Nature by Numbers

Today I want to share an outstanding video found by my colleague Nico.

This kind of material always shocks and amazes me!

When I was a child, I imagined how multimedia content could be more educational than just boring traditional books. I remember playing with animated GIFs to show the cyclic nature of glucose, and later on playing with PowerPoint, Flash, etc. to try to explain complex things visually. I prefer a good picture to a thousand words.

So, when I see a video like the one below, I need to watch it two or three more times before I can close my mouth again, and that only happens after satisfying my curiosity and gathering the full details. Math, nature and a piece of art, all in one.

Now enjoy it, and turn on full-screen mode!

Nature by Numbers from Cristóbal Vila on Vimeo.

The three principles explained in the video:

Intro and the making-of.

After seeing the video, and coming back from the off-topic: isn't it beautiful to dream that maybe Nature is really model-driven… and actually has a complex and hidden metamodel governing it all?

All credit to Cristóbal Vila, Etérea Studios and his great videos.

Fantastic, maño! I take my hat off to you.


Additional Model Driven bonus: reviewing the making-of, I found two visual models (DSLs) (this and this, using XPresso) describing algorithms in visual form and driving the animation in two scenes. Wow!

Presenting at Semana Informatica 2010

Link to semanainformatica.com

On April 27, my colleague Nicolas Cornaglia and I will be presenting a talk with some live demos in Valencia, Spain, representing our company, Capgemini, as part of the event Semana Informatica 2010.

The title of the talk will be: Productivity through frameworks and MDD.

The session will be delivered in Spanish. Full agenda (PDF version) and session details.

Abstract:

Business applications in enterprise software usually follow a fixed set of standards (global or in-house) to help keep maintenance costs as low as possible (reducing TCO). In this context, homogeneity and regulatory compliance are frequently a must.

The main goal of our presentation will be to show how an approach based on a good framework, modeling tools and code generation techniques can be the right set of tools to achieve a high degree of standardization, quality, productivity and flexibility in evolving the enterprise architecture. Such flexibility is key to providing a better time to market when a business process changes or a technical requirement suddenly emerges.

Balancing Variability & Commonality

When creating a DSL (Domain Specific Language), one of the most important choices is deciding which items in your domain are going to be considered variable and changeable, and which ones are going to be considered fixed, carved in stone.

The former need to be specified in your DSL, in your design, or maybe in code. The latter are considered immutable and will remain static across all your derived applications for ages.

Considering everything static is obviously useless. On the contrary, considering every aspect variable leads to another dead end, again yielding nothing tangible as a result. Therefore, we will have to search for virtue somewhere in the middle.

The main task here is to study a domain and weigh the variable parts against the fixed ones. It is not a trivial thing to do from the very beginning. Experience in DSL construction and, especially, experience in the domain help to train your nose, but there are no clear rules for it, nevertheless.

It is not only about knowing your requirements. It is about trying to predict how your requirements will change over time and which types of requirements are more likely to change.

Adding variability

A variable part could be, for example, the background color of your application. If so, you need to add syntax and semantics to your DSL to capture such a property. Let's say you can express somewhere in your specification:  { background-color = peach; }

We can select the peach color for app1, and maybe ivory for app2.
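As a minimal sketch of the idea (hypothetical Python model code, not Essential's actual notation), a variable property with a sensible default could look like this:

    from dataclasses import dataclass

    # Hypothetical application spec: background_color is a variable
    # feature captured per application, with a default when omitted.
    @dataclass
    class AppSpec:
        name: str
        background_color: str = "white"  # sensible default if unspecified

    app1 = AppSpec("app1", background_color="peach")
    app2 = AppSpec("app2", background_color="ivory")
    app3 = AppSpec("app3")  # omits the property; falls back to "white"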

However, nothing is free, and this freedom comes with the following possible drawbacks:

  • You need to increase the size of your language (DSL), editors, model checkers, compilers, and code generators or interpreters.
  • Users have to provide a value for such a property, unless you have also provided a sensible default value for when the information is missing.
  • Homogeneity across applications vanishes with respect to background-color. Now it’s a user choice (the one in control of the modeling tool).
  • Specs are more complex.

Adding commonality

On the other hand, if you consider that the background of your application should always be the same because you are following, for example, a user interface style guide, then the background color is a fixed issue. Its value is provided by design (by a style guide, an architect, or a design choice) and the modeling user has no control over it.

In this scenario, the DSL is smaller. There is no need to specify the background color: it is implicit and not included in the model/specification.

With this kind of choice, we are betting on standardization. A shared library, a runtime framework or an interpreter will take care of supplying the right color at the right moment.

  • Users cannot change the background color; specs are smaller.
  • Standardization is improved across applications.
  • Users have no control over the feature.
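The commonality side could be sketched like this (again hypothetical Python, standing in for a shared runtime framework):

    # Hypothetical shared runtime framework enforcing a style guide:
    # the background color is fixed by design and never modeled per app.
    STYLE_GUIDE_BACKGROUND = "corporate-grey"

    def render_background(app_name: str) -> str:
        # Every derived application gets the same value; changing the
        # style guide is one change here, followed by a redeploy.
        return f"{app_name}: background={STYLE_GUIDE_BACKGROUND}"

    print(render_background("app1"))  # app1: background=corporate-grey
    print(render_background("app2"))  # app2: background=corporate-grey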

But, what is the right choice?

It depends. There is no right choice with the information given so far. To answer the question, we need to consider whether the background color is a fundamental feature in our domain that needs to differ from application to application, or whether, on the contrary, the color should be used in a homogeneous way, following a predefined style guide.

Again, the domain imposes the rules to follow. Studying the domain and its variability is crucial to creating a consistent DSL focused on gathering the key features of the domain in a model: the important and variable ones. The important and fixed ones must also be identified, but they should not be included in the model; they belong in the framework or the runtime.

Standards, policy assurance, compliance

Everything related to standard procedures, compliance and in-house style guidelines is a first-class candidate for standardization. Done that way, your developers will not have to remember all those weird standards and compliance rules when developing a specific artifact.

A code generator will provide the right value for them, silently, without errors or oversights. All the boring plumbing code (naming guidelines, service publication, serialization, persistence, adapters, proxies, skeletons, stubs, DAO code) is driven by strict standards and best practices, and it is a natural candidate for strong automation by code generators.

Moreover, if the regulation or the standard changes, the change will impact the following assets:

  • a single change to a framework will be enough,
  • or a change to a code generator, followed by a regeneration and redeployment process.

In both cases, it is cheaper than manually reviewing a set of in-production applications.

For example, think about moving your data-layer access code from a DAO pattern with raw SQL to an ORM-based approach like Hibernate.
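As a deliberately toy-sized sketch (hypothetical Python, far simpler than any real generator), the same entity model can feed two interchangeable templates, so the migration becomes a generator change plus regeneration rather than a manual rewrite of every application:

    # Hypothetical toy generator: one model, two interchangeable templates.
    ENTITY = {"name": "Customer", "fields": ["id", "name"]}

    def generate_dao(entity: dict) -> str:
        # Emits DAO-style data access code with embedded SQL.
        cols = ", ".join(entity["fields"])
        return (f"class {entity['name']}DAO:\n"
                f"    SQL = 'SELECT {cols} FROM {entity['name']}'\n")

    def generate_orm(entity: dict) -> str:
        # Emits an ORM-style mapping class instead (names illustrative).
        fields = "\n".join(f"    {f} = Column()" for f in entity["fields"])
        return f"class {entity['name']}(Base):  # ORM mapping\n{fields}\n"

    # Switching from DAO+SQL to ORM is a one-line change here,
    # followed by regenerating every derived application.
    print(generate_dao(ENTITY))
    print(generate_orm(ENTITY))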

Business Know-How

The core of the business know-how is the set of important and variable parts we are interested in collecting in a specification. Such features need to be modeled and, if possible, abstracted from the technology that will implement them.

If we do it this way, the model can survive the current technology.

Why would we be interested in doing it this way?

Simply because technology evolves like fashion. Today everyone likes red T-shirts; tomorrow blue jeans will be sublime! Basic, Cobol, C, Java, C#, Ruby… what is the next language to use in five years' time?

Make your best bet on whatever platform better fulfills your requirements, but it would be nice to see the business process survive the technology. ;)  We don't know in which direction, but technology will evolve, and it will change for sure.

Maintaining a language or a DSL

When a DSL or a language needs a review, you will probably be considering adding new features to it.

Each new feature will increase the variability and the complexity of the language. Before deciding to add a new modeling feature, make a cost/benefit analysis and double-check that the value added by the improvement is greater than the cost of implementing it.

I like to follow the golden rule proposed by Gordon S. Novak about automatic programming:

“Automatic Programming is defined as the synthesis of a program from a specification. If automatic programming is to be useful, the specification must be smaller and easier to write than the program would be if written in a conventional programming language.”

Conclusion

Whenever possible:

  • Business Know-How should be captured by models, specs, DSLs.
  • Technical Know-How should be captured by code generators, model interpreters, best practices and patterns.

So, at the end of the day, I like the following pair of quotes to sum up what to include in a model:

  • The Spanish writer Baltasar Gracián said in the 17th century: “Lo bueno si breve, dos veces bueno.” (A literal translation from Spanish could be: “Good things, if brief, twice as good.”)
  • On the other side, Albert Einstein (20th century) counters: “Things should be as simple as possible, but not simpler.”

Countdown for CG2010

The Programme for Code Generation 2010 has been published.

This year Mark has invited me to give an introductory session on Model Driven Software Development (MDSD), oriented to beginners.

In a second session, I will also discuss creating tailored code generators.

See you in Cambridge in June!

Metalevels & Meta-metalevels

It seems like a tongue twister, and sometimes it is. Modeling and metamodeling is a topic with a high probability of misinterpretation. In the end, the concepts involved have subtle differences, and we talk about them at different levels depending on the properties we want to stress.

My friend Peter Bell posted, some time ago, a nice introductory article on models, metamodels and meta-metamodels.

If you have some basic background in databases, the examples Peter provides will help you understand it all.

By contrast, I usually prefer to explain it starting from the top and then going downhill. But I have to admit that the reverse (as explained by Peter) is probably easier for newcomers to follow.

Now my version:

Take the following concepts: entity, attribute and relation. These are more than enough to create your basic meta-meta-model. With these primitive concepts you can build everything from scratch in every model (see the sketch after the list below)!

  • OMG / MOF call this level M3.
  • When instantiating these M3 concepts, you build M2 models and create meta-models for UML class diagrams or state machines. For example, the UML class can be described as an entity with an attribute called Name, etc.
  • Instantiating M2 allows you to create M1 models: typically your business problem, dealing with invoices (class Invoice) and customers (class Customer) and their corresponding association relationships.
  • Finally, when instantiating M1 models, you are creating living objects in an M0 world (let's call it: The Reality). For example, customer=ACME and invoice=INV0003 are living objects in your application runtime.
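The whole chain can be sketched in a few lines of Python (hypothetical names and a drastically simplified structure, not MOF's actual API):

    # M3 (meta-meta-model): the axiomatic primitives.
    class Entity:
        """An entity with named attributes: enough to bootstrap modeling."""
        def __init__(self, name, attributes):
            self.name = name
            self.attributes = attributes

        def instantiate(self, **values):
            # Create an instance one level below, checking its attributes.
            assert set(values) == set(self.attributes)
            return values

    # M2 (metamodel): instantiating M3 defines a language concept.
    # The UML "Class" concept: an entity with a Name attribute.
    uml_class = Entity("Class", ["Name"])

    # M1 (model): instantiating M2 models the business domain.
    customer_class = uml_class.instantiate(Name="Customer")
    invoice_class = uml_class.instantiate(Name="Invoice")

    # M0 (reality): instantiating M1 yields living runtime objects.
    acme = {"class": customer_class["Name"], "key": "ACME"}
    inv = {"class": invoice_class["Name"], "key": "INV0003"}
    print(acme, inv)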

Funny and weird, isn't it? It's like building Aztec pyramids, but with a top-down approach.

This layered approach to modeling, based on abstraction and instantiation, is crucial to understanding MOF or any meta-modeling tool you will ever use.

  • M3 (the meta-meta-model) is usually hard-wired in metamodeling tools like MetaEdit, EMF, or MS DSL Tools/Corona.
  • M2 defines the rules for modeling (meta-modeling). A typical metamodel is the UML metamodel hard-wired into each UML tool you use. If you change an M2 model, you are creating a new language, in a literal sense. Example: the UML class concept.
  • M1 comprises the models you create with tools like UML. Example: a class named Customer.
  • An M0 example would be the object ACME, persisted as a row in a table and representing a customer at runtime in one particular piece of software.

Frequently, people ask: why do we stop at M3? Is it not possible to have an M4, a level above M3?

Well, the short answer is no: M3 is defined in terms of itself (in MOF, the meta-meta-model is reflexive and describes its own concepts), so no further level is needed.

However, the long answer is: can you find simpler, more abstract primitives than entity, attribute & relation and still be able to derive/express the same concepts? If this is possible and convenient for your domain, then you have just invented your M4 level.
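As a pure thought experiment (hypothetical, not any standard's level), one could try to collapse entity, attribute and relation into a single graph primitive and re-derive the three from it:

    # Hypothetical "M4": a single primitive (a typed node with typed
    # links) from which entity, attribute and relation are re-derived.
    nodes = []

    def node(kind, name, **links):
        n = {"kind": kind, "name": name, "links": links}
        nodes.append(n)
        return n

    # The three M3 axioms become mere kinds of node:
    entity = node("entity", "Customer")
    attribute = node("attribute", "Name", owner=entity)
    relation = node("relation", "places", source=entity)

    print([n["kind"] for n in nodes])  # ['entity', 'attribute', 'relation']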

In the end, you start by defining some primitive concepts as axioms: they cannot be simplified or divided into simpler parts, and no axiom can be derived from the others.

Note that you can use as many levels as needed, but you need a root level containing the axioms to start from, and an M0 to set an arbitrary reference for reality. For my taste, I would have numbered them in reverse order: M0 for the axioms and M3 for reality.

If you take a look, the core metamodeling concepts in MOF, EMF, MetaEdit and Microsoft's models are basically the same: entities or core classes, attributes or properties, and relations.

From these primitives, it is easy and convenient to build any syntactic construction needed for the lower levels.

By contrast, how to incorporate the emerging semantics at each new level is still a topic of strong debate. But that is another interesting open topic for another post…

Long live meta-meta-modeling!

MDD talk at UCLM/ABC

Giving a talk about my favourite topic is always a pleasure. Giving it at home is an invaluable one.

I've been invited to talk about MDD at my home university, the University of Castilla-La Mancha (UCLM) in Albacete, Spain, where I started my Computer Science studies. So I relive a lot of good memories of my student days every time I go back there.

On Monday, February 22nd, I will be there sharing my points of view and experiences on applying MDD.

Update:

When & where:  February 22nd, 18:00. Assembly hall (salón de actos) of the Escuela Superior de Ingeniería Informática, Albacete (location).