
Latest posts.

End User Programming for Mobile Apps

Mobile Apps
Jean Bézivin (@jbezivin) (see slide 4 here) has a great slide about how the number of worldwide professional developers grows linearly with respect to time, while the need for software grows exponentially.

Mobile applications are a specific market where the demand for new software is exploding like never before.

The obvious conclusion: we developers are not going to be enough to cope with the demand without improving our productivity by orders of magnitude and/or lowering the entry barrier for non-programmers to create applications.

You know, as an MDD/MDE practitioner I am probably biased, but I strongly think that Model Driven Engineering is in an excellent position to play a crucial role in both worlds:

  • For professional developers: to provide tools that deliver such a productivity improvement.
  • For end users: to provide tools that are as easy as possible, allowing non-programmers to solve their common tasks.

 

Today, let's talk a bit more about end users: why would an end user want to develop an app?

  • Because it is something I want, or I like.
  • Because it is cool.
  • Because it is easy (or should be easy; let's say doable for the average person).
  • Because it is cheaper than asking someone else to develop it for me.

 

Personal Apps

A Personal App is an application tailored to a specific customer and their needs. It may have only one user (such as an RSS and content aggregator for day-to-day consumption of videos, music, or posts) or be shared among friends (like a birthday application or a shared travel experience). Company apps, product catalogs, and apps about specific content such as music, TV, or films are highly popular on the app markets.

 

Windows Phone App Studio

In this context, Windows Phone App Studio, released two days ago by Microsoft, is a great step forward in this direction, enabling end users to create their own apps: Personal Apps.

Within a three- or four-step wizard, the user selects a template for the application; defines, changes, or refines the contents; and picks the layout and colors. No technical questions, no programming skills, just the minimum set of questions required: less is more in this context. In five minutes the work is done and the application is ready to be deployed to the device.

See this 5-minute video: Windows Phone App Studio Introduction

Windows Phone App Studio can generate, compile, and deploy the app directly to the user's phone via a download initiated by a QR/BIDI code, or let a developer download the source code to extend, change, and customize the app.

 

Native code

Many app builders focus on HTML5, JavaScript, and tools like PhoneGap/Cordova. This approach has the advantage of providing a common JS core that can be deployed cross-platform to many devices, at the cost of giving up the advantages of native development: better performance, access to each device's unique features and APIs, and better battery management. For the problems of JS in mobile development, see this great article/essay: Why mobile web apps are slow.

If you are a developer, you will of course be most productive with the tools you know. But if you can afford code generation, definitely do it: a native app will outperform its JS counterpart.

 

Quality by Design

Another key feature of a code generator for end-user apps is hiding a huge pile of technical details. Very huge, trust me. If you want to develop a new mobile application from scratch and publish it in a marketplace, you have to consider many, many factors:

  • Compliance with the UX and style guide of the device
  • Margins
  • Colors
  • Behaviors of the application
  • APIs for accessing all the features
  • What is allowed in the marketplace and what is not
  • Etc.

From our own experience with Radarc, passing the Windows Phone Store certification for our first app took us five iterations of one week each. A well-tuned code generator can produce code that passes on the first run, saving a lot of time and, therefore, money. Of course, there are other constraints, such as the content being adequate and not offensive; that is certainly a human task, but the generator solves the other 95% of the issues.

 

To sum up: I am very, very happy with the teamwork so far. Congrats to everyone involved in the creation and launch of Windows Phone App Studio.

End user programming is not a new term, but hey, new tools are on the table. Let's see how people react and what they can build with them. Enjoy!

 

Rethinking development design choices with MDE

The thinker, Paris. Photo CC by Dano

Decisions in all contexts of life are taken with the best information we can collect and under many assumptions. Changes in the surrounding context can invalidate those assumptions, and then it is time to question and rethink our overall strategy.

Software architecture and design decisions are based on principles such as the following:

  • Simplicity
  • Ease of use
  • Maintainability
  • Separation of Concerns
  • Understandability
  • Low coupling
  • High cohesion
  • Robustness
  • Performance
  • Speed

These principles lead to methods and low-level techniques in the process of software creation. Some of them are based on assumptions about the cost of producing, debugging, and maintaining code by capable humans, also known as developers.

Every language, tool, and framework comes with its own boilerplate code, ceremony, and plumbing. By plumbing I mean the code you need to add to glue your components to the framework and/or infrastructure. Plumbing code tends to be a real pain for developers: it provides no added value to the business, it is tedious to write, it is boring, and it is a source of bugs. Most programmers hate repeating such plumbing code, which relegates them to the role of mere typists.
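
To make the point concrete, here is a small, hypothetical illustration (the class and field names are invented): the kind of hand-written glue code that adds no business value yet has to be typed over and over again.

```java
// Hypothetical illustration of plumbing code (all class and field names invented):
// a hand-written mapper copying fields between an entity and its DTO.
class Customer {
    String id;
    String name;
    String email;
}

class CustomerDto {
    String id;
    String name;
    String email;
}

final class CustomerMapper {
    // Tedious, repetitive, and a classic source of copy/paste bugs:
    // exactly the kind of glue a generator can emit from a model.
    static CustomerDto toDto(Customer entity) {
        CustomerDto dto = new CustomerDto();
        dto.id = entity.id;
        dto.name = entity.name;
        dto.email = entity.email;
        return dto;
    }
}
```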

My hypothesis here is that many of the development choices we make rely on the cost of changing code. And many developers make such decisions instantly, like following a mantra of their "agile" religion, without stopping to think twice.

But what happens if such a cost is reduced by a factor of 10? Could this lead us to rethink our design choices and approach the problem with new eyes?

Now, I need you to consider the possibility that we are doing MDE (Model Driven Engineering) and we have a code generator: a good one, capable of improving your productivity as a developer by up to a factor of 10 in a given domain. Of course, if such a code generator ever existed, it would be a disruptive technology, wouldn't it? Maybe you don't believe in code generation, or have had bad experiences with it in the past, but please, open your mind and just consider the possibility before reading on:

With this kind of tool under your belt and in your mind, let's now review how things would change.

Code generators are fed by metadata, or models, or specifications. Choose your preferred name; the semantics remain the same. In the end, a model or a spec is a piece of information at a certain level of abstraction, useful for reasoning about a system.
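
As a minimal sketch of that idea (everything here is invented for illustration and not tied to any real tool's format), a "model" can be as humble as structured data describing entities, and a generator is simply a program that walks it and emits code:

```java
import java.util.Map;

// Minimal, hypothetical sketch of a generator fed by a model (all names invented).
// The "model" is just structured data: entity names mapped to field name/type pairs.
public class TinyGenerator {

    public static void main(String[] args) {
        Map<String, Map<String, String>> model = Map.of(
            "Customer", Map.of("name", "String", "email", "String"),
            "Invoice",  Map.of("number", "int",  "total", "double"));

        model.forEach((entity, fields) -> System.out.println(generate(entity, fields)));
    }

    // Turns one entity of the model into a trivial Java class skeleton.
    static String generate(String entity, Map<String, String> fields) {
        StringBuilder out = new StringBuilder("public class " + entity + " {\n");
        fields.forEach((name, type) -> out.append("    private " + type + " " + name + ";\n"));
        out.append("}\n");
        return out.toString();
    }
}
```

A real generator obviously emits much more than field declarations, but the shape is the same: the model carries the intent, the generator carries the technology.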

Conclusion 1:
In this context, if the generator is going to improve your productivity by a factor of 10, it makes sense to dedicate more time to carefully building the model (analysis and design) and less time to actual coding.

The model will increasingly reflect the essence of the business. The code can be discarded and regenerated whenever needed, making it easy to move to a different technology.

Conclusion 2:
Therefore, the value shifts more and more into the model, and the code becomes increasingly discardable as you raise the percentage of generated code. In the same way, people started to code in C instead of maintaining assembler code once they trusted the compiler.

Conclusion 3:
In this context of work, Forward Engineering would be mainstream and the way to go: building and maintaining models, generating 80% of the code, and then adding the missing 20%.
It makes no sense, in terms of cost and feasibility, to look for non-existent reverse engineering tools to keep models in sync with code. Code is guaranteed to be in sync with the models if it is only produced by the generator and never touched by a human developer.

The goal is not to replace or avoid developers, but to relieve them from doing the boring things (the 80%) and let their brains focus on the real quest: the non-trivial missing 20%.

Conclusion 4:
Don't touch generated code. The delimitation between user code and generated code should be strict, and those who dare to break this rule must face the punishment of rewriting and refactoring their code.
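
One way to keep that delimitation strict, sketched below with invented names and not tied to any particular generator, is the classic generation gap pattern: generated code goes into a base class that is always overwritten, and custom code lives only in a hand-written subclass.

```java
// GENERATED FILE (hypothetical example): overwritten on every regeneration, never edited by hand.
abstract class CustomerServiceBase {

    // Generated, stable behavior lives in the base class.
    public final String greet(String customerName) {
        return "Hello, " + customerName;
    }

    // Explicit extension point reserved for hand-written logic.
    public abstract double applyDiscount(double amount);
}

// HAND-WRITTEN FILE: the only place developers touch; it survives every regeneration.
class CustomerService extends CustomerServiceBase {
    @Override
    public double applyDiscount(double amount) {
        return amount * 0.9; // custom business rule
    }
}
```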

Design choices:

Design choices frequently involve taking decisions on:

  • Repositories: concrete, generic, no repositories at all?
  • Data Transfer Objects (DTO): one for all, many (one per use case), none
  • POJOs/POCOs vs full entity-logic objects
  • Anemic models?
  • DAO / ORM – mapping headache
  • Fluent APIs or XML configuration files
  • Inversion of Control frameworks

Once again, these choices are traditionally resolved by taking maintainability and ease of change into serious consideration, because the software will most likely change along with the business.

But now, with our new eyes, let's drop the old assumption: this code is no longer a problem to change. The cost of revisiting one of these decisions is close to zero, or at least much smaller than changing code in the traditional way: just flip a switch in the code generator strategy and regenerate everything again.

Let me give you an example: ORM mapping files (à la Hibernate/NHibernate) tend to be a real nightmare, especially XML files, when facing a medium or large system with 500 tables. Writing XML files is tedious, error prone, and real torture for developers. In that context it makes total sense to use a fluent API approach, convention over configuration, and any technique that helps alleviate this painful and repetitive task.

However, if you use a code generator (and I do, no science fiction here) that lets you select the target ORM and writes the correct mapping files in 100% of the cases, then, in this new scenario, XML is no longer a pain. I can occasionally open a mapping XML file or a fluent API version and check that it is correct, as long as I do not have to write it by hand.
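
As an illustration (a minimal sketch with invented table and column names; a real generator would emit this, or the equivalent XML or fluent mapping, straight from the model), the generated mapping for a single entity could look like this in JPA annotation form:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical generated mapping: nobody writes or maintains this by hand;
// it is re-emitted from the model for whichever ORM flavor is selected.
@Entity
@Table(name = "CUSTOMERS")
public class Customer {

    @Id
    @GeneratedValue
    @Column(name = "CUSTOMER_ID")
    private Long id;

    @Column(name = "FULL_NAME", length = 200, nullable = false)
    private String name;

    // Getters and setters omitted for brevity.
}
```

Whether the output is annotations, XML, or a fluent API then becomes a generator switch rather than a hand-maintenance decision.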

And that is basically what I want to stress: design choices in the software industry are strongly influenced by the way and the tools we use to create software assets. If we change the tools and the cost of change, we should start rethinking whether there is a better way of satisfying the other principles; for example, preferring ease of change in the model over ease of change in the code.

Enabling Evidence-based Software Engineering:

Once you take the red pill, there is no way back. When facing a new problem, instead of imagining which architecture would perform best for your system, you can build a proof of concept in two or three candidate architectures, create a benchmark, and test and measure them under a realistic stress load. Only then, after comparing the measurements, do you decide on the best architecture for the system.
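
As a toy sketch of that workflow (the candidate names and workloads are placeholders; real proofs of concept would be generated applications measured under realistic load tools), the decision reduces to running the same scenario against each candidate and comparing the numbers:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch: time the same scenario against each candidate architecture.
// The "candidates" here are placeholder workloads, not real generated PoCs.
public class ArchitectureBenchmark {

    public static void main(String[] args) {
        Map<String, Runnable> candidates = new LinkedHashMap<>();
        candidates.put("candidate-A (e.g. layered + ORM)", ArchitectureBenchmark::simulatedWorkloadA);
        candidates.put("candidate-B (e.g. CQRS-style)", ArchitectureBenchmark::simulatedWorkloadB);

        candidates.forEach((name, scenario) -> {
            long start = System.nanoTime();
            scenario.run();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(name + " -> " + elapsedMs + " ms");
        });
    }

    static void simulatedWorkloadA() { busyWork(50_000_000); }
    static void simulatedWorkloadB() { busyWork(30_000_000); }

    // Placeholder for a realistic stress scenario against a generated proof of concept.
    static void busyWork(long iterations) {
        long acc = 0;
        for (long i = 0; i < iterations; i++) acc += i;
        if (acc == 42) System.out.println(); // prevent dead-code elimination
    }
}
```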

This way of doing things changes the quality of the software we create and how we create it, turning an artisanal process into a more scientific one, driven by data-based decisions.

Some of you are ready to accept and embrace it; others are not. My point of view on the issue could surely be strongly biased, but after working in conceptual modeling and code generation for more than 14 years, my feeling is that the technology for modeling and code generation has matured a lot in recent years and the disruption point is approaching faster and faster. Will you jump onto this fast train?

What do you think?

 

PS: For those who have read this far but still don't believe in code generation yet:

Advertising ON:  

Would you like to try a real and modern code generator to see its possibilities?

Take a look at Radarc and share your impressions.

Radarc is done by developers for developers.

Advertising OFF:

 

CG2012 Summary, day 3

CG2012 Summary (part III)

This is the last post in my series of summaries of the Code Generation 2012 conference. After having reviewed the LWC day and days one and two, this third and last day concludes my personal report. Some of you will remember my intense tweeting in such days under the hashtag #cg2012, but I had a good reason: such tweets help me document the event and log the activities, which actually makes the process of writing these summaries possible. So, let’s stop meta-posting and dive into the content of the day D.

Friday 30th, day 3

The last day of the conference started with the keynote “Speed and Innovation through Architecture” by Jan Bosch (@JanBosch). This was very provocative in that it addressed MDSD and Software Architecture within the context of innovation in industrial contexts and presented it from a business perspective: opposite but complementary to the technical content that usually drives a CG session. Jan supported his talk with examples drawn from Nokia, Siemens, and other companies he is either currently working with or worked with prior to his current academic position.

Some of the pearls in his talk (debated by the audience on Twitter later) were:

  • The market is moving from products to services
  • The need for speed: R&D departments should have very short innovation cycles. Speed is the primary feature to achieve (time to market).
  • Moving faster: no efficiency improvement will outperform cycle time reduction.
  • Traditional SW development does not scale. We need to change the way we create SW.
  • Amazon releases new software every 11 seconds…
  • R&D as an Experiment System
  • Decisions should be based on DATA, not opinions
  • Learning: the company running the most experiments against the lowest cost per experiment wins
  • There are things you can’t predict until you perform tests
  • Lean and Agile at Scale: like a dancing elephant (see the photo in the slides)


Some tweet reactions about it:

  • @delphinocons: Reducing dev. cycle time is 10x more beneficial than improving dev. efficiency I can see him build the case for MD(S)D from here. #cg2012
  • @tvdstorm: No more heavy process. This is going the way of the dinosaurs. @JanBosch #cg2012
  • @stevekmcc: @JanBosch 10% more revenue vs. 10% less dev costs (“efficiency”) assumes rest of org has zero incremental cost & demand is infinite #cg2012
  • @delphinocons Amazon releases new software every 11 seconds… wow. @JanBosch @ #cg2012
  • @pmolinam  ”Learning: the company running the most experiments against the lowest cost per experiment wins” @janbosch #cg2012 #innovation –> Google
  • @tvdstorm: @JanBosch’s keynote: crowd sourcing the scientific method at a very high speed. Continuous integration -> deployment -> AB testing. #cg2012
  • @pmolinam Arquitecture goals accordingly to @janbosch Simplify, Decoupling, Lean and agile at scale, End to end quality, fight design erosion #cg2012
  • @pmolinam “Interconnected teams and organizations asymptotically reduce productivity to zero” -> Twitter? @janbosch #cg2012 #innovation
  • @pmolinam Decoupling: NO versions! only the deployed one! @janbosch #cg2012 #innovation <- Hard to achieve out of the web
  • @pmolinam Very great keynote! by @janbosch #cg2012 #innovation very inspiring and totally on-topic.
  • Great slide @janbosch #cg2012 implication http://t.co/OBiSKfjZ
  • @MarkDalgarno: Every engineer at @intuit spends at least 1 day a year with a consumer seeing how their products are used in practice. @janbosch at #cg2012

As commented, this was a great keynote with respect to rethinking the role of MDSD within business organizations and how MDSD is still a great and undiscovered lever to drive innovation inside a company from within the IT and R&D departments. Dear CIOs, take note.

After the keynote, there was time for a quick coffee before I moved on to prepare my session on “Multichannel User Interfaces”.

In this session, I presented some issues we are working on at Icinetic. When developing a new service or product, UIs are expensive to develop if you have to offer the service on the leading mobile platforms. The market is fragmented and thus provides several competing platforms and technologies with which to build a UI: XAML/C#, JavaFX/Java, Android, Objective-C/Cocoa, HTML5/JavaScript with a JS framework, etc. The technology is changing quite fast and there is an open war to determine who will dominate the mobility space. Developing for every platform's target UI technology is neither cheap nor maintainable; in short, it doesn't scale. It is in this context where modeling device-independent user interfaces makes complete sense for business software. I demonstrated one of the prototypes we are building, called Oz UI, which builds on my work on conceptual user interface patterns and is able to declaratively specify and prototype the UI and then generate an implementation on top of any of a set of different widget technologies.

I got very good feedback from the audience and feel that there is room for creating new tools in this domain that further reduce the effort developers spend on the plumbing present in every development framework.

After the session and with the feeling of work completed, it was time for lunch and relaxation.

My last technical session was: tercnoC xatnyS srettaM (Concrete Syntax Matters) by Steven Kelly. The superbly selected title speaks for itself: when designing a new language there are many considerations to be taken into account in order to make the language pleasant for your users. Usability, concreteness, unambiguity, homogeneity, human perception of colors, forms and text are all relevant when addressing the design of a new language. The success of your language — measured in number of users — will be influenced greatly by these choices. See Steven in action.

Following our last coffee/orange-juice break, we all moved to the closing session: a panel discussion on “Code Generation – how far have we come in 5 years?”, led by Andrew Watson, who introduced the topic and the panelists: Wim Bast, Steven Kelly, Darius Silingas, and Markus Völter.

Some ideas that were discussed:

  • MDSD Commoditization
  • Moving from a technical discourse to the necessary business discourse
  • The complexity of SW is ever growing; MDSD helps manage the complexity

Some interesting tweets about the panel:

  • @lmontrieux: Final panel session begins at code generation #cg2012 http://t.co/ffQOqyQ6
  • @delphinocons: Wim Bast: competing DSLs and LWBs is good for innovation, but bringing MD(S)D to the market requires commodity as well. #cg2012 closingpanel
  • @pmolinam @CompSciFact “Before you can design a good DSL, you have to understand the D.” On-topic for #cg2012
  • @tvdstorm: Need: live prototyping environments to be able to try out many different designs. Modeling needs immediacy, liveness and directness. #cg2012
  • @delphinocons Wim Bast/Darius Silingas: we have to avoid focusing too much on the technical side in favor of the business benefits of MDSD. Agree! #cg2012
  • @pmolinam On SoC and abstractions in DSLs #cg2012 Panel
  • @stevekmcc We are building the complex systems as ever. DSL helps in this race. #cg2012

Summing up

Finally, I present my personal conclusions on the State of the Art in MDSD based on what I have seen at CG2012:

  1. Two main lines of work were seen in MDSD this year:
    • Bottom-up using a low-level general language and adding extensions that raise the abstraction level (eg.: mbeddr over C or Webr-DNQ over Java) for a specific purpose (domain), and
    • Top-down using a high-level DSL/language with a specific, targeted level of abstraction in order to describe a domain (hiding technology details and other concerns) upon which code generation is applied (samples: Radarc, IFML, MetaEdit+).
  2. More effort is needed to explain ROI for potential MDSD consumers (customers).
  3. Increasing complexity and diversity (mobile) in technology pushes the market ever closer to MDSD (as there are no other choices).
  4. Technology independence (one of the foremost qualities of systems developed via MDSD) provides even greater value when technology is undergoing rapid advances (see the mobile arena for example).
  5. Some MDSD technologies are powerful yet also rather complex; more work is required to simplify their presentation in order to increase user adoption.
  6. User Interface modeling is gaining momentum this year (with at least four sessions), as it is a great and challenging DSL domain.
  7. The cloud provides opportunities for applying MDSD; this context is already showing adoption in transparent infrastructure (see Mendix for a sample), deployment, and configuration (cloud operation).
  8. Multi-core CPUs provide opportunities for executing DSL specs with efficient parallel processing.


As always, CG2012 was a great opportunity to take the pulse of the MDSD community, providing a good overview of the current problems and areas of research. If you want to get in the loop don’t miss the next edition — join us in 2013.

I want to thank Robert McCall for his work. He contacted me and volunteered to polish my sometimes rusty English and make the full text more readable.

CG2012 Summary, day 2

CG2012 Summary (part II)

After reviewing LWC2012 and CG2012 day 1, let’s continue with day 2.

Day 2, Thursday 29th

The morning started with the first keynote. Markus Völter led the session with the title “Domain-Specific Language Design – A conceptual framework for building good DSLs”. Here Markus took a deep dive with an ontology-like approach, reviewing the dimensions of DSL design across nine topics: expressivity, coverage, semantics, separation of concerns, completeness, paradigms, modularity, concrete syntax, and process. In the session, he focused mainly on expressivity, semantics, modularity, and concrete syntax. I found especially interesting the dissection of the types of language extension and composition, with detailed samples in each case. In summary, good quality stuff, as Markus usually delivers. As Markus commented, this material will be published as the book “DSL Engineering” in early 2013.

After the coffee break, I attended Peter Friese's tutorial on “Traditional and Model-Driven Approaches for Cross-Platform Mobile Development”. Peter demonstrated a very good knowledge of the mobility space, presenting not one or two alternatives but six (6) ways of developing cross-platform mobile applications, with their pros and cons. It was a great talk covering native development (showing iPhone, Android, and WP7), HTML5, and JavaScript frameworks like Sencha or jQuery Mobile, as well as cross-platform tools like PhoneGap. It was a pity that the WiFi connectivity kept failing and interrupted the flow of the demo at some moments, but Peter was able to overcome it and show what was going on.

User interfaces are always interesting to me, so Achim Demelt's session was a must-see for me: “Mission: Impossible — Purely declarative User Interface Modeling”.

The session was very good. The slides alone are not enough; the accompanying demo showed the tool at work. Achim and his team created Silverlight-based UIs backed by a Java back-end. The S4 environment presented is agile enough to model and generate UIs for the ERP domain Achim was targeting.

My next session was from the JetBrains guys Maxim Mazin and Evgenii Schepotiev, with the talk “Webr-DNQ — web application development with pleasure”. They showed the language extensions JetBrains has designed on top of Java using MPS to build in-house products like YouTrack. This is a very clear example of the saying: eat your own dog food.

After a coffee, it was back to action with a very different session: Steven Kelly led the hands-on session titled “Have your language built while you wait”. Here, some of us who create language workbenches were placed in a room with our laptops, waiting for customers. In rounds of 25 minutes we attended to them, showing the capabilities of each tool and solving a concrete, practical small problem proposed by the customer. 15 master craftsmen, representing 11 top language workbench tools, volunteered their time to build languages for the participants' domains. It was a very interesting format because it not only allowed people to try new tools, but also promoted cross-pollination between tool makers. For my part, I was there showing Essential, and I also had the chance to play a little with The Whole Platform with Riccardo Solmi and Enrico Persiani and to take a closer look at Ensō with Alex Loh.

Steve prepared a good summary of this session (take a look for the details on each tool). I borrow the video here :-).

 

So far, so good! Another day full of code generation, but the dessert was still missing.

This year CG2012 replaced the relaxing punting trip on the river Cam with a talk called “How Apollo Flew to the Moon” by David Woods. The result: totally amazing! He gave us all a brief but detailed introduction to the Apollo systems and navigation procedures and then dove deep into the specific problems of Apollo XI, the mission where Aldrin, Armstrong, and Collins engraved their names in history.

Given the audience, David placed special emphasis on the computer devices on board the Apollo missions. It is totally amazing that such rudimentary technology was good enough to fly to the moon and come back! The Q&A produced many geek questions about the Apollo missions, which David answered with flying colors, impressing the audience even more.

So I couldn't resist: I bought my copy of his book and got it autographed by David (I was not the only one, BTW). A good read for sure, if you like space and/or engineering.

CG2012 Summary, Day 1

A personal CG2012 summary (part I)

Better late than never, here it is, finally: my summary of CG2012. In this series of three posts I will summarize my days at CG2012 in Cambridge, held on 28-30 March. The full programme can be reviewed here: http://www.codegeneration.net/cg2012/program.php.

Disclaimer: as usual, CG2012 ran three sessions in parallel, so you always miss ⅔ of the good stuff. In any case, this year we were able to alleviate this: attending with two more Icinetic colleagues (Rubén Jimenez and Jonathan Cáceres) helped us divide the sessions and cover the full event.

Day 1, Wednesday 28th

Rubén and I led one of the opening sessions of CG2012. In our talk, Rubén presented in a practical way how the .NET platform has matured over the years and now provides opportunities and enough base tooling to use MDSD. Not being on top of Eclipse/EMF is no longer an excuse. During the demo time, Rubén also showed Radarc, our product at Icinetic for building and generating applications for different architectures. The slides of the session follow:

After the coffee break, I moved to the session “User Interaction Modeling: current status of the standardization process, from the requirements to the language” given by Marco Brambilla and Emanuele Molteni (from WebModels). Marco and Emanuele are pushing a standardization effort called IFML inside the OMG to propose a UI standard based on their UI modeling experience. In the mid-term, UML tool vendors could finally start adding support for UI modeling, and this is a good thing per se. Other standardization efforts are ongoing, such as UIML in OASIS and Jean Vanderdonckt's UsiXML in the W3C.

Time for lunch, and back to action. Enrico Persiani and Riccardo Solmi presented “Integrating model driven technologies in the publishing industry”. This was a very refreshing session, presenting a novel domain for MDSD: interactive books. Riccardo and Enrico showed the base XML previously used for composing the books and how this approach became a maintenance nightmare for every single book. Using The Whole Platform, they built a projectional editor integrating images and colorization preview in a kind of WYSIWYG approach, making editing a much more pleasant experience.

After that, I moved on to Markus Völter & Bernhard Merkle's session on “mbeddr C: an MPS and model based, extensible version of the C programming language”. Using MPS, Markus and Bernhard have extended the base C language to include safety and productivity features like unit test support, type-safe units, state machines, etc. Impressive bottom-up work, inserting new features into a well-known language. mbeddr C will surely find real applications in the embedded software industry.

To end the day, Eric Jan Malotaux presented a very original case study. Under the title “Transforming a 15 year old model-driven application from C++ to Java”, he presented the costs and migration efforts involved in moving a legacy application, modeled over a database and generating C++, to more modern EMF models with Java source code as output. Eric pointed out that many of the problems came from the different (implicit) architectural assumptions made in the source model; not until they understood them were they able to solve the problems. Eric concluded that although it was neither cheap nor easy, it probably would have been more expensive and more painful if the source had been plain source code instead of a model with well-defined semantics behind it. Eric gave a great talk and was very prudent, giving only numbers he could back up with data.

The case Eric presented made me think twice about how MDD keeps providing advantages years after a system is built. In the end, it is always a problem of dealing with complexity: a model has lower complexity (it is more constrained) than pure source code.

We ended the evening in a more relaxed way at the nearby Castle Inn, sharing good moments.

Summary: Language Workbenches Challenge 2012

LWC2012 logo

Two weeks ago, I honored the yearly tradition of traveling to Cambridge to join my colleagues at the Code Generation conference.

As mentioned before on this blog, this conference is something unique. It is not an academically oriented event: it has no papers and publications in the traditional manner of an academic meeting. Instead, it shows live tools and samples of real technology in use in industry, as well as the latest research tools from academia, in the fields of software modeling, domain-specific languages, and code generation. If you want to meet and talk with the toolmakers, this is definitely the place to come.

The event was intensively reported via twitter under the hashtags #cg2012 and #lwc2012.

In this first post I will review the LWC and write about CG2012 in the next one.

LWC2012

On Tuesday the 27th, the 2nd edition of the Language Workbenches Challenge took place. 32 of us met there to see how 10 tools solved the Piping and Instrumentation problem. This year I was only an observer; with no time on my side to prepare a solution, I hope to submit one again next year.

Rui Curado presented his solution with AtomWeaver. He has added a graphical notation on top of AtomWeaver over the last year. Great work.

Juha-Pekka Tolvanen showed a solution based on MetaEdit+. Given the maturity of MetaEdit+, the graphical nature of the problem, and Juha-Pekka's experience, his solution excelled, first showing a use case of the system and later going meta and showing us how it was done.

A UML solution was presented by Paul Zenden. Using Enterprise Architect as the base tool, complemented with Xtend/Xpand, he was able to metamodel all the elements needed to describe the problem. He took advantage of the capabilities of the EA tool to add custom graphic symbols for the newly defined elements.

Tijs van der Storm presented a textual approach based on Rascal. Graphics were generated using an additional graphical library.

Alex Loh from Texas introduced us to Ensō. Ensō is a new language workbench built on top of Ruby. Although a bit slow in its first version, it opens the door to a new kind of LW, taking advantage of the dynamic and interpretive capabilities of the Ruby language.

The Web is ubiquitous, and this trend is also reaching the modeling space. Meinte Boersma, in cooperation with Martin Thiede, presented Más (Modeling as a Service; "más" means "more" in Spanish). Martin created and presented Concrete, a projectional editor based on Ruby and HTML5, at CG2011. Now Meinte has extended it to create a full web modeling environment targeting enterprise modeling.

Finally, Marko Boger presented an implementation of the graphical language built using Spray. Spray is a project that provides textual DSLs to create graphical DSLs on top of Eclipse Graphiti.

Spray was conceived in the Castle Inn (our local CG reference pub) one year ago, after a CG2011 session, when people shared their complaints about how painful and slow it was to maintain GMF editors or to build them from scratch using Graphiti. Now, one year later, Spray is a reality. Good job, very well done!

This edition was very diverse with respect to the tools, approaches, and solutions taken. Great workshop!

We ended the day with the traditional family photo. Special thanks to Angelo Hulshout for organizing it and to Paul Zenden for proposing the challenge and the reference implementation (thanks, Angelo, for reminding me how it was).
See you next year!

LWC2012 family photo

 

Radarc 3.0 Released!

My arrival at my new job in Sevilla has coincided with the preparation and launch of a new product. We at Icinetic are releasing Radarc 3.0. Radarc is a very easy-to-use code generator, tightly integrated with Visual Studio and targeting .NET technologies.

Radarc has the ability to produce multiple architectures from the same base models, keeping the generated artifacts in sync when model elements change. Architectures and the DSLs for defining the models are packaged in so-called “Formulas”.

Currently, the following formulas are available for download, free for non-commercial usage:

  • ASP.NET Web Forms + Entity Framework
  • ASP.NET MVC 3.0 + Entity Framework
  • ASP.NET MVC 3.0 + Entity Framework + Azure Storage & deployment
  • Windows Phone 7

Radarc creates a complete prototype application in seconds, following the cycle: change the model, touch no line of code, build and run. Prototyping an application is a matter of minutes, and you obtain a first scaffolding of your application. Moreover, custom code can be inserted in specially designated locations that are preserved in every regeneration lap.

Radarc 3.0 is available with three licensing models and is free for non-commercial usage.

Other technologies are available on demand, such as:

  • .NET 4.0 Domain Driven Design N-Layered Architecture
  • NHibernate & more to come…

Some usage cases:

  • If you work in a .NET development shop, feel free to give it a try and give us some feedback.
  • If you want to start learning one of the previous technologies or architectures, you can also use Radarc to generate a reference sample application and start exploring the code.
  • If you are an experienced software architect and need to evaluate and benchmark software architectures before choosing a winning one for your project, consider the cheap possibility of generating the same application in two technologies and testing how well each performs for your specific problem.

These days, I am learning a lot about the state of the art here at Icinetic, and I hope to start contributing to the bits very, very soon.

Bonus: a 20-minute demo video (in Spanish) generating three architectures is available.
Next week we will be attending Code Generation 2012. If you are interested, join us and see a live demo or download it and give it a try!

Code Generation 2012

CG 2012 logo

Time flies! Code Generation 2012 is only two weeks away.

During the last few weeks we have been very busy combining day-to-day work with the preparation of material for the conference.

This year the conference will be held from the 28th to the 30th of March, in the usual place: Murray Edwards College, Cambridge, UK.

The programme this year comes with some interesting sessions. I want to highlight some of them:

For my part, now established in Seville, I will join the conference with some of my Icinetic colleagues. The activities we will be involved in include:

As every year, I am looking forward to joining and meeting the CG community again. See you there, guys!

New kid on the block: XCore

The guys at the Eclipse Modeling Framework, led by Ed Merks (@edmerks), are working hard on XCore to provide ECore models with a textual syntax. This is a feature that has been needed for a long time.

It was also a nice surprise to see that the XCore syntax is quite close to Essential Meta. This is good news again: interoperability between the Java modeling side of the world (EMF) and the .NET modeling efforts with Essential is now simpler with the arrival of XCore.

As Meinte Boersma (@miente37) commented on Twitter: semicolons seem to be the main difference.

BTW, a new release of Essential with minor bug fixes is out (0.5.1). Try it out!

¡Hola Sevilla!

Plaza de España, Sevilla (By CCSA Tom Raftery)

 

Model Driven Development is one of my favourite research topics. That is why, when the Icinetic guys contacted me and offered me the chance to join them and work together, I had few arguments to resist the temptation and enlisted.

So today I'm moving to Sevilla, in the south of Spain, to start a new phase of my life, working as Chief Research Officer at Icinetic, a young MDD tool-maker and consultancy company.

I am quite excited to have the chance, the tools, and the right team (with both the required business vision and the technical background) to focus on innovation and create cool MDD & code generation tools.

We are going to enjoy it, for sure!