

Four years at SIB Visions, a look back and ahead

My fourth year at SIB Visions has ended; once more it is time to take a deep breath, sit down and look back at everything that has happened so far.

Demos

As mentioned in last year's post, there are several new demo applications available in the VisionX application store. These range from member portals to ERP applications and demonstrate not only the capabilities of JVx but also those of VisionX.

The layout of a simple screen with a table and a few editors.

The blog posting already outlines quite well what the demo applications encompass, so there is no need to reiterate that here. However, working on these was quite interesting because they cover such a vast spectrum of applications with quite different requirements, and as I've said before (at some point), one should always use the product one creates.

JVx Reference series

The JVx Reference series, a collection of blog posts which is supposed to give an overview of the most important aspects of the JVx Application Framework, has also grown considerably:

The series now covers all of the most important topics of the JVx framework. The new articles outline the remaining core concepts of JVx and give a good overview of them. If you feel like something is still missing and should be added to this list, feel free to drop us a message so that we can include it. We're also considering pressing these into an easier-to-digest format, to make getting started with JVx as easy as possible for newcomers.

Kitchensink

After having it around for so many years, I finally managed to properly introduce the JVx Kitchensink last year. It is a simple demo application which is supposed to give a good overview of all the components and controls of JVx.

JVx Kitchensink - Databinding

It is also my go-to application when it comes to debugging problems with JVx or testing the same functionality in different implementations. With this introduction it also received a much-needed upgrade in aesthetics and functionality; for example, it is now easy to see the source of each sample.

JVx and Lua

I'm quite a fan of LuaJ, a Lua implementation for Java. It allows you to quickly and easily deploy a complete Lua integration and add scripting support to your application. I've worked with LuaJ on a private project of mine and have been fascinated with how easy it is to create a scripting interface with it, so of course I could not resist the temptation to create a proof-of-concept JVx bindings library with it.

These bindings give you access to the complete GUI layer of JVx, directly from Lua scripts. There is also a demo application for these bindings available.

JVx Lua Demo Application

Additionally, these bindings are used in the JVx FormLayout Visualization application, which makes it easy to inspect certain configurations of the FormLayout and see how it behaves with each change.

JVx FormLayout Visualization Application

Vaadin 8

In the late summer of 2017 we started working on migrating from Vaadin 7 to Vaadin 8. That was not a small task, but it was eased by the availability of the Vaadin 7 compatibility layer, which allowed us to gradually and deliberately upgrade parts and single components to the new version. All in all, the migration went rather smoothly even though it was quite time consuming. On a similar note, we took the opportunity and enabled the new client-side layouts by default. These new layouts provide a much more sophisticated layouting mechanism and allow applications to utilize layouts as was previously only possible in desktop applications.

I already wrote briefly about these new layouts in last year's post, and I will do so again. The main motivation behind these new layouts is to be able to use sophisticated layouts on the client side which are driven dynamically by the size of each component. There is no such support in Vaadin by default, so we had to create our own layouts for this. The main idea behind these layouts is to have the layouting mechanism completely on the client side, where it only operates with hints (constraints) given by the server side, if they are required at all. On the client side, each component which is added to the panel is measured and then absolutely positioned inside the panel according to the layout. So technically it works exactly the same as the layouts in the Swing or JavaFX implementation, with the sole difference that a layer of indirection (namely the client/server connection) sits between the component management and the layout itself.
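To illustrate the general idea, here is a conceptual, self-contained sketch of such a measure-then-position pass. All class and field names are made up for this example; this is not the actual Vaadin or JVx implementation:

import java.util.ArrayList;
import java.util.List;

/**
 * Conceptual sketch of a client-side layout pass: measure every child,
 * then position it absolutely inside the parent.
 */
public class ClientSideLayoutSketch
{
    /** A stand-in for a measured client-side widget. */
    public static class Child
    {
        public int width;
        public int height;
        public int x;
        public int y;
        
        public Child(int pWidth, int pHeight)
        {
            width = pWidth;
            height = pHeight;
        }
    }
    
    /**
     * Stacks the children vertically, honoring a gap hint that would come
     * from the server side as a layout constraint.
     */
    public static void layout(List<Child> pChildren, int pGapHint)
    {
        int y = 0;
        
        for (Child child : pChildren)
        {
            child.x = 0;
            child.y = y;
            
            y = y + child.height + pGapHint;
        }
    }
    
    public static void main(String[] pArgs)
    {
        List<Child> children = new ArrayList<Child>();
        children.add(new Child(100, 30));
        children.add(new Child(100, 50));
        
        layout(children, 5);
        
        for (Child child : children)
        {
            System.out.println(child.x + "," + child.y + " " + child.width + "x" + child.height);
        }
    }
}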

While further work was done on these layouts, we also needed a way to actually measure the time spent in each panel's layout method. Unfortunately, there wasn't a ready-made solution at hand, so I dug around the Vaadin internals and found that it has a profiler built in. Nice, or so I thought; as it turned out, getting it to spit out the needed information was a little more work than anticipated. In the end, however, I was able to gather quite valuable information from it.

The remaining time was filled with making sure that Vaadin 8 worked the same way as Vaadin 7 did, and replacing all the deprecated Vaadin 7 components wherever possible.

Documentation pipeline

As part of our efforts to unify our documentation and provide a better documentation experience (for us, our users and our customers) we've built a custom documentation pipeline. This pipeline allows us to feed it Markdown documents and have it spit out HTML, PDF and DokuWiki files. The main components which enable us to do that are Pandoc, the Swiss Army knife when it comes to converting documents, and wkhtmltopdf, which converts HTML documents into PDFs.

The requirements for this pipeline were simple enough:

  1. Documentation must be easy to write.
  2. Documentation must have a unified look, always.
  3. Conversion into multiple formats is required (HTML, PDF and DokuWiki).

That led us to the decision to use Markdown with Pandoc. Markdown is easy to write and easy to read and does not dictate any sort of styling for the finished document (though it is possible with embedded HTML). Pandoc can convert the Markdown documents to HTML and DokuWiki and additionally allows us to manipulate these documents on the fly by using Lua filters. With these filters one can manipulate the internal document representation, add or remove parts, or modify existing ones. In short, there is not much the Lua filters cannot do.

Conversion to PDF was a quite different matter. The logical choice was to convert the HTML to PDF, as the HTML was already quite well laid out. wkhtmltopdf is a tool which uses the Qt WebKit engine to render an HTML document into a PDF; there are quite a few gotchas on the way, but overall it does that very well. Additionally, with modern CSS, one can write rules for different devices, which makes it easy to change the style of a document as soon as it is sent to a printer.

Last but not least, we needed a way to easily start this pipeline on different machines. We've opted to use Apache Ant as the launcher environment for the pipeline for a few simple reasons:

  • We are already familiar with it.
  • It allows us to write complex logic using the ant-contrib tasks.
  • As it is a Java solution, we can write the script once and run it everywhere. So we don't need to duplicate logic over multiple files for different systems.

So the only thing that needed to be system dependent were the launcher scripts. We've put all the necessary logic into the Apache Ant build file (which mostly copies files and executes different programs like Pandoc), which in turn is invoked through the system-specific launchers.

I'm not sure how much detail I can go into here, but most of the work was obviously creating the HTML template and figuring out which options make the most sense. I can give you two pieces of advice, though. First, when working with Pandoc and Lua filters, the native (or JSON) output format of Pandoc is your best friend when it comes to debugging the document structure. Second, when trying to create print material with wkhtmltopdf, disable the automatic scaling of content; it will save you a lot of headache.

We're still looking into how we can make this pipeline available to the general public; whether it will be a product or (partial) FLOSS has not been decided yet.

EPlug

With the start of the new year we found the time to squeeze some much-needed bug fixes into EPlug and publish the 1.2.7 bug fix release. It brings various bug fixes, improved performance and a better DataBookView to the table.

Diving into database specific behavior

There were some problems surrounding our PostgreSQL JDBC implementation. As it turned out, when an exception occurs during a transaction, PostgreSQL awaits user interaction on how to proceed. I've outlined everything related to it in another blog post, and as it turned out, PostgreSQL wasn't even the odd one out during my tests.

The look ahead

That was a quite slow year, actually. The Vaadin 8 migration took a lot of time, but a few interesting projects sneaked in nonetheless. I'm currently working on another quite interesting project which I hope to finish soon. Stay tuned for it.

Thanks to everyone at SIB Visions, it's been an awesome year and I'm looking forward to another one with all of you!

More about Application styling

We showed you the upcoming CSS edit feature of VisionX. This feature was used in our new tutorial, the Styled Invoice application.

The standard Invoice application was more or less a standard VisionX application with some standard colors and icons. Not super fancy.

The Styled Invoice application demonstrates how easy it can be to apply your own style to an application created with VisionX.

The new tutorial is available in our documentation system.

The comparison of both applications

The standard application:

Standard Invoice application

The styled application:

Styled Invoice application

JVx and PostgreSQL, supporting Savepoints

Transactions are an integral mechanism of databases; without them we could hardly ensure data consistency. Well, we could of course, but it would be a lot more work. JDBC of course has support for transactions, but it also supports so-called "Savepoints", which we will have a look at today.

What are transactions?

Normally, I'd assume that every one of you knows what transactions are and what they are used for, but today I will quickly rehash it to make sure. Transactions enable us to define groups of statements to be executed on a database, with either all succeeding or all failing. Imagine the following statements which might be executed on a database as one action:

1: Insert record into A
2: Update record of B
3: Update another record of B
4: Insert record into C

This might happen in the background if you click a button or do something similar. This example is straightforward, but if we start thinking about possible problems we will soon realize that there is a lot of potential for things to go wrong. For example, if statement #3 failed, we would be missing records in tables B and C. Overall, our data would be in an inconsistent state after that. To make sure that this does not happen, we can define these statements as a single transaction.

Transaction
  |-Insert record into A
  |-Update record of B
  |-Update another record of B
  |-Insert record into C
  \-Commit

If one of these statements fails, all changes done by previous statements can be undone and the database will return to the state before we started manipulating it. That is great, because we can now guarantee that even though an error has happened, the data will remain in a consistent state.

We could even extend that with additional application logic: for example, we insert three records and then check whether the values of all records add up to a certain threshold; if so, we simply undo the changes. Or we undo them if we notice that a constraint has been violated (though pretty much all databases support constraint definitions in one way or the other). Additionally, transactions are completely isolated from all other connected clients. That means that if we start a transaction and insert a record into table A, all other clients will not see this record until we commit our changes. So the data is never in an inconsistent state, not even temporarily or transiently.
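In plain JDBC, which is also the layer JVx builds upon, the example above could look like the following minimal sketch; the connection details and table names are made up for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class TransactionSketch
{
    public static void main(String[] pArgs) throws SQLException
    {
        // Placeholder connection details.
        try (Connection connection = DriverManager.getConnection("jdbc:postgresql://localhost/demo", "demo", "demo"))
        {
            // Switch off auto commit so that all statements run in one transaction.
            connection.setAutoCommit(false);
            
            try (Statement statement = connection.createStatement())
            {
                statement.executeUpdate("insert into a (id) values (1)");
                statement.executeUpdate("update b set value = 2 where id = 1");
                statement.executeUpdate("update b set value = 3 where id = 2");
                statement.executeUpdate("insert into c (id) values (1)");
                
                // Either all of the above changes become visible...
                connection.commit();
            }
            catch (SQLException e)
            {
                // ...or none of them do.
                connection.rollback();
                throw e;
            }
        }
    }
}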

What are Savepoints?

Savepoints are sub-transactions within another transaction. They allow us to undo parts of an ongoing transaction.

Another example:

Transaction
  |-Insert record into A
  |-Insert child record into B
  |-Insert child record into B
  |-Insert another record into A
  |-Insert child record into B
  |-Insert child record into B
  |-Insert another record into A
  |-Insert child record into B
  |-Insert child record into B
  \-Commit

This is a little more contrived and complicated: assume we want to insert three master records with two detail records each. The following conditions apply:

  • If a master record fails to insert, nothing should be inserted.
  • If a detail record fails to insert, the corresponding master record should also not be inserted.

This is hard to do with a simple transaction, but quite easy using savepoints:

Transaction
  |-Savepoint 1
  |   |-Insert record into A
  |   |-Insert child record into B
  |   \-Insert child record into B
  |-Savepoint 2
  |   |-Insert another record into A
  |   |-Insert child record into B
  |   \-Insert child record into B
  |-Savepoint 3
  |   |-Insert another record into A
  |   |-Insert child record into B
  |   \-Insert child record into B
  \-Commit

We create a savepoint before the insert of each master record. If the insert of the master record fails, we roll back the transaction; if the insert of a detail record fails, we roll back to the savepoint we created earlier. This allows quite complicated and nested transactions, especially because there is no defined limit to how deeply savepoints can be nested.
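As a minimal JDBC sketch of this pattern (the table names are made up, and the connection is assumed to already have auto commit switched off):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.sql.Statement;

public final class MasterDetailInsertSketch
{
    /**
     * Inserts one master record with its two detail records, guarded by a savepoint.
     */
    public static void insertMasterWithDetails(Connection pConnection, int pMasterId) throws SQLException
    {
        Savepoint savepoint = pConnection.setSavepoint();
        
        try (Statement statement = pConnection.createStatement())
        {
            statement.executeUpdate("insert into a (id) values (" + pMasterId + ")");
            
            try
            {
                statement.executeUpdate("insert into b (a_id, pos) values (" + pMasterId + ", 1)");
                statement.executeUpdate("insert into b (a_id, pos) values (" + pMasterId + ", 2)");
                
                pConnection.releaseSavepoint(savepoint);
            }
            catch (SQLException e)
            {
                // A detail record failed: undo only this master/detail group,
                // the rest of the transaction stays usable.
                pConnection.rollback(savepoint);
            }
        }
        catch (SQLException e)
        {
            // The master record itself failed: undo the whole transaction.
            pConnection.rollback();
            throw e;
        }
    }
}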

Error behavior

Of course it can always happen that a statement fails for one reason or another, so it is important to know how the database behaves once an error has occurred. Let's build ourselves another simple example:

Transaction
  |-Insert record into A
  |-Insert record into B (this fails)
  |-Insert record into C
  \-Commit

We insert three records, and the second one fails. How should the database behave in such a situation? It turns out that this differs between database systems.

Silent/Automatic restore to a previous state

Many databases perform a simple "silent/automatic restore to a previous state": all changes (if any) made by the current statement are undone, and the transaction can be treated as if the failing statement had never happened. With the example above, and assuming that we do not cancel the transaction on an error, the records would be inserted into A and C.

In our tests Oracle, MySQL/MariaDB, H2 and SQLite were all behaving this way.

Requiring manual recovery

PostgreSQL requires a "manual recovery" from a failed statement. Once an error has occurred during a transaction, the user has to roll back to a (manually) set savepoint or roll back the complete transaction. We will go into details on that later.

Reverting everything and happily continuing

MS SQL on the other hand has a quite different approach. When an error occurs during a transaction, all changes are (automatically) rolled back but the transaction can still be used. So in our example, only the record in C would be inserted.

PostgreSQL JDBC and Savepoints

Back to PostgreSQL and how it requires manual recovery. When a statement fails within a transaction in PostgreSQL, the transaction enters the aborted state and one can then see an error like the one below if further statements are issued:

Current transaction is aborted, commands ignored until end of transaction block.

What that means is simply that the connection/server is still waiting for input on what to do with the transaction. There are three possible ways to recover from there:

  1. Rollback the complete transaction
  2. Commit, which will be transformed into a rollback at this point
  3. Rollback to a savepoint

But these actions must be initiated by the user before the transaction can be further used (or not, if it is being rolled back completely). If one wants to emulate the behavior of other databases in PostgreSQL, every statement that is executed within a transaction has to be "wrapped" with a savepoint, in pseudo-code:

begin transaction
    savepoint
    try
        execute statement
        release savepoint
    catch
        rollback to savepoint
   
    ...
commit

Even though that sounds tedious, it is not. If you have such a requirement, you already have a central point through which all statements pass before being executed, so this can be implemented easily.
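In JDBC terms, such a wrapper could look like the following minimal sketch (the helper name is made up; it assumes an open connection with auto commit switched off):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.sql.Statement;

public final class SavepointWrapperSketch
{
    /**
     * Executes a single statement guarded by a savepoint, so that a failing
     * statement does not abort the surrounding PostgreSQL transaction.
     *
     * @return {@code true} if the statement succeeded, {@code false} if it was rolled back.
     */
    public static boolean executeGuarded(Connection pConnection, String pSql) throws SQLException
    {
        Savepoint savepoint = pConnection.setSavepoint();
        
        try (Statement statement = pConnection.createStatement())
        {
            statement.execute(pSql);
            pConnection.releaseSavepoint(savepoint);
            
            return true;
        }
        catch (SQLException e)
        {
            // Undo only the failed statement; the transaction stays usable.
            pConnection.rollback(savepoint);
            
            return false;
        }
    }
}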

JVx and Savepoints

In our case that central point is DBAccess, our main datasource. Because every database interaction has to pass through DBAccess (in one way or the other), we could easily implement such emulation at a low level, and it is automatically available to all users. To be exact, DBAccess has received internal support to wrap all statements in savepoints when configured to do so. This configuration possibility is protected and is currently only used by the PostgreSQL DBAccess extension. It does exactly what it says on the tin and is only active when enabled and when automatic commits have been turned off. So this change has no effect on any database but PostgreSQL.

We already have plans to extend this basic savepoint support with a public API which allows users of JVx to utilize this new functionality. One of the ideas that we are currently discussing is to provide the ability to create named savepoints. A simple mockup of that idea:

// Switch off auto commit to use transactions.
dbAccess.setAutoCommit(false);

// Insert some data.
dbStorage.insert(aRecord);
dbStorage.insert(anotherRecord);

// This part is optional.
try
{
    dbAccess.setSavepoint("NAME");
    
    dbStorage.insert(yetAnotherRecord);
    dbStorage.update(someOtherRecord);
}
catch (DataSourceException dse)
{
    log.error(dse);
    dbAccess.rollbackTo("NAME");
}

dbAccess.commit();

As said, we are currently in the process of discussing such possibilities, but we definitely want to provide such an API at some point.

Conclusion

As it turns out, the "special" behavior of PostgreSQL isn't as special as it seems to be. It is a design decision that was taken and that is understandable. Changing this behavior now, 20+ years in, is out of the question as it would require a substantial effort to make sure that this behavior is backwards compatible. The gains from such a change on the other hand would be very little, as it is a quite specialized case in which this behavior matters and the "fix" is rather easy.

Documentation redesign

Some years ago, we put all our documentation into our Forum. This wasn't a perfect home for our documentation, but at least everything was in one place. By now, however, we know that the Forum was a bad fit for framework and product documentation.

Last year, we started evaluating a new documentation system and were shocked by the quality and features of the available solutions. We found a SaaS solution, but we didn't want to store our documentation elsewhere. After some research we made a decision, and our new documentation system is DokuWiki. The system itself is very nice, but the standard theme is not pretty. However, it's pluggable, and there are many nice-looking themes and very useful plugins.

After some customization we were very happy with the result, and the migration from our Forum to DokuWiki was not a hard task. It's still in progress, but we're happy to announce that our new documentation system is available for you!

The new link is https://doc.sibvisions.com/

The new system also contains a lot of information about VisionX which we hadn't published before.

Have fun with our new documentation system!

Our Forum will be continued, but only as a place for questions and answers.

JVx Reference, CellEditors

Let's talk about CellEditors, and how they are decoupled from the surrounding GUI.

What are they?

While we've already covered large parts of how the GUI layer and the model of JVx work, the CellEditors have been left completely untouched and unmentioned. One might believe that they can easily be explained together with the Editors; however, they are a topic of their own, and at times a complex one.

The difference between Editors (the UIEditor for the most part) and CellEditors is that the Editors only provide the high-level GUI control, while the CellEditors provide the actual functionality. Let's take a look at a quite simple screen.

The layout of a simple screen with a table and a few editors.

We see a window with a table on the left and some editors on the right, simple enough. Now these components we are seeing are UIEditors, not CellEditors. The CellEditors themselves are only added as child components to the Editors, so the Editors are basically just panels which contain the actual CellEditor.

The same screen, but with the CellEditors differentiated from the UIEditors which contain them.

So technically every UIEditor is just another panel which gets the CellEditor added. The CellEditors themselves follow the same pattern as all GUI components in JVx: there is the base interface, possibly an extension of the technology components, the implementation and finally the UI object. They are, however, rarely used directly when building the GUI, but mostly only referenced when building the model.

Why are they?

If you want to make GUI editor components, I know of two possible ways off the top of my head to achieve that: you create dedicated editor components for the datatypes that are available, for example a NumberEditor, TextEditor and so forth; or you create one editor component which acts as a mere container and allows you to plug in whatever behavior is wanted for the type you're editing.

We've opted for the second option, because it means that the GUI is actually decoupled from the datatypes (and by extension the data) of the model. If we had separate components for each datatype, changing the datatype of a single column would mean that you'd have to touch all editors associated with that column and change that code, maybe with ripple effects on the rest of the GUI. With CellEditors, one can change the datatype of a column and not worry about the GUI that is associated with that column. The CellEditor is changed on the model once, and that change is automatically picked up by all Editors. Which also means that one can define and change defaults very easily and globally.

Of course one can also set the preferred or wanted CellEditor directly on the Editor, instead of using the one defined in the model, should the need arise.
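A hypothetical override could look like this; the snippet assumes the setCellEditor method on the editor, and the multiline text cell editor is just an arbitrary choice for the example:

private void initializeUI() throws ModelException
{
    editor = new UIEditor(dataBook, "COLUMN");
    
    // Use a specific cell editor for this editor only, instead of the
    // one defined on the column's datatype in the model.
    editor.setCellEditor(new UITextCellEditor(ITextCellEditor.TEXT_PLAIN_MULTILINE));
    
    add(editor);
}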

And the table?

The same applies to the Table. Theoretically, every cell of the Table can be viewed as a single Editor, for this context at least. So a single cell behaves the same as an Editor when it comes to how the CellEditors are handled.

How many are there?

JVx comes with a variety of CellEditors out of the box:

  • Boolean
  • Choice
  • Date/Time
  • List
  • Number
  • Text
    • HTML
    • Multiline
    • Password
    • Standard

With these, nearly all needs can be covered. If there is a need for a new one, it can be created and added like any other UI component.

Using CellEditors

As said previously, which CellEditor is used is defined primarily with the model, for example:

private void initializeModel() throws ModelException
{
    dataBook = new MemDataBook();
    
    ICellEditor cellEditor = new UITextCellEditor();
    IDataType dataType = new StringDataType(cellEditor);
    ColumnDefinition column = new ColumnDefinition("COLUMN", dataType);

    RowDefinition rowDefinition = dataBook.getRowDefinition();
    rowDefinition.addColumnDefinition(column);
    
    dataBook.open();
}

private void initializeUI() throws ModelException
{
    editor = new UIEditor(dataBook, "COLUMN");
    
    add(editor);
}

We can see that every column has a datatype and every datatype has a CellEditor. That allows the model to provide the actual editing functionality without changing the GUI code. The Editor, when notifyRepaint() is called, will fetch the CellEditor from the datatype and use it. Additionally, there is a technology dependent default mechanism which allows this system to work even when the UI classes are not used.

Let's do a step by step explanation of what happens:

  1. The model is created.
  2. The GUI is created.
  3. The model invokes notifyRepaint() on all bound controls.
  4. The Editor gets the CellEditor from the model and adds it to itself.

One moment, instance sharing?

If we revisit the example code from above, we will notice that the CellEditor instance is set on the model and must then be used by the Editor. That means that a single CellEditor instance is used for all bound Editors. We all know that sharing instances in such a way can be fun, but in this case it is not a problem because CellEditors are only "factories" for the actual editing components.

The ICellEditor interface actually only specifies two methods: whether it is a direct cell editor, and the factory method for creating an ICellEditorHandler. The CellEditorHandler is the manager of the instance of the component that is going to be embedded into the Editor.

  1. notifyRepaint() is called on the editor.
  2. The Editor gets the CellEditorHandler from the CellEditor.
  3. The Editor gets the component from the CellEditorHandler and embeds it.

This mechanism makes sure that no component instances end up shared between different GUI components.

A closer look at the CellEditorHandler

If we take a good look at the CellEditorHandler interface, we see that it contains everything that is required for setting up a component to be able to edit data coming from a DataRow. One method is especially important, the getCellEditorComponent() function. It returns the actual technology component that is to be embedded into the Editor. That means that even though there are implementations for the CellEditors on the UI layer, the actual components which will provide the functionality for editing the data are implemented on the technology layer. A short refresher:

The different layers of JVx.

Revisiting our simple screen from above, we'd actually need to represent it as something like this:

FormLayout with one added component.

Because the embedded components in the Editor are actually on the technology layer.
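In heavily simplified form, the interplay looks roughly like the sketch below. The interfaces here are made-up stand-ins for ICellEditor and ICellEditorHandler, only meant to illustrate the factory-and-embed flow, not the actual JVx signatures:

/**
 * Illustrative sketch (not the actual JVx code) of what an editor roughly does
 * when it is notified that it needs to (re)build its embedded component.
 */
public class EditorSketch
{
    private CellEditorHandlerSketch cellEditorHandler;
    
    public void notifyRepaint(CellEditorSketch pCellEditor)
    {
        // Ask the cell editor (the "factory") for a handler bound to this editor...
        cellEditorHandler = pCellEditor.createCellEditorHandler("COLUMN");
        
        // ...and embed the technology component it manages.
        embed(cellEditorHandler.getCellEditorComponent());
    }
    
    private void embed(Object pComponent)
    {
        // Technology-specific embedding would happen here.
        System.out.println("Embedding " + pComponent);
    }
    
    /** Simplified stand-in for ICellEditor. */
    public interface CellEditorSketch
    {
        CellEditorHandlerSketch createCellEditorHandler(String pColumnName);
    }
    
    /** Simplified stand-in for ICellEditorHandler. */
    public interface CellEditorHandlerSketch
    {
        Object getCellEditorComponent();
    }
}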

CellRenderers

There is another small topic we need to discuss: CellRenderers. They follow nearly the same schematics as CellEditors but are used to display values directly, for example values in a table cell. The Table is also the primary component which uses them to display the cell values until editing is started. For simplicity reasons, most CellEditors implement ICellRenderer directly and provide management of the created component. That is because reusing components for merely displaying values is easier and does not carry as much error potential.

Conclusion

CellEditors provide an easy mechanism for editing data, and more importantly, they are decoupled from the GUI code in which they are used, in a way which allows the model to change, even dynamically. That enables programmers to create and edit screens and models quickly without needing to check whether the GUI and the model fit together; they always do.

JVx Reference series

We wrote a lot of useful articles about JVx in the past weeks. You can use them as reference:

Here are the direct links to the articles:

Technologies and Factories
Resource and UIResource
Launchers and Applications
Application Basics
Custom Components
Events
DataBooks
The FormLayout
CellEditors

And the link to our JVx documentation.

Vaadin, let's hack the Profiler

Vaadin comes with a built-in Profiler which is only available in debug mode, which might not be available or practical. So let us see if we can use it without debug mode enabled.

It has a profiler?

Yes, there is one built right into the client side. You can activate it easily enough by adding the following to the widgetset:

<set-property name="vaadin.profiler" value="true" />

But to see the results, you also have to switch the application into debug mode by changing the web.xml to the following:

<context-param>
    <description>Vaadin production mode</description>
    <param-name>productionMode</param-name>
    <param-value>false</param-value>
</context-param>

This allows you to enter the debug mode of an application, though it requires restarting the application (or, in the worst case, a redeploy). One upside of testing new widgetsets is that they can normally be applied to any running application without modifications, because they are purely client-side; so changing the configuration of the application might not be possible or desirable.

There are some forum posts out there which claim that it is enough to enable the profiler to see its output logged to the debug console of the browser, but that is not the case anymore.

Let's have a deeper look

The Profiler can be found in the class com.vaadin.client.Profiler and is completely client-side. It will store all gathered information until the function logTimings() is called, which then hands the gathered information to a consumer which can do with it whatever it wants. Now comes the interesting part: there is no public default implementation of the consumer provided. If you want to log what was profiled to the debug console, you'll have to write your own. Even if there were one, the function setProfilerResultConsumer(ProfilerResultConsumer) is commented with a warning that it might change in future versions without notice and should not be used - interesting. Also interesting is the fact that it can only be set once; once set, it cannot be changed.

Hm, looks a little bare-bones. Of course there is an "internal" class that utilizes it to send its output to the debug window, but unfortunately we can't use any of that code. So let's see what we can do with it.

Send everything to the debug console

The easiest thing is to simply send all the output to the debug console of the browser, which we can do easily enough:

import java.util.LinkedHashMap;
import java.util.List;

import com.vaadin.client.Profiler.Node;
import com.vaadin.client.Profiler.ProfilerResultConsumer;

/**
 * A simple {@link ProfilerResultConsumer} which is outputting everything to the
 * debug console of the browser.
 *
 * @author Robert Zenz
 */
public class DebugConsoleProfilerResultConsumer implements ProfilerResultConsumer
{
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Initialization
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    /**
     * Creates a new instance of {@link DebugConsoleProfilerResultConsumer}.
     */
    public DebugConsoleProfilerResultConsumer()
    {
        super();
    }
    
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Interface implementation
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    /**
     * {@inheritDoc}
     */
    @Override
    public void addProfilerData(Node pRootNode, List<Node> pTotals)
    {
        debug(pRootNode);
        
        for (Node node : pTotals)
        {
            debug(node);
        }
    }
    
    /**
     * {@inheritDoc}
     */
    @Override
    public void addBootstrapData(LinkedHashMap<String, Double> pTimings)
    {
        // TODO We'll ignore this for now.
    }
    
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // User-defined methods
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    /**
     * Sends the given {@link Node} to the debug console if it is
     * {@link #isValid(Node) valid}.
     *
     * @param pNode the {@link Node} to log.
     * @see #isValid(Node)
     */
    private void debug(Node pNode)
    {
        if (isValid(pNode))
        {
            debug(pNode.getStringRepresentation(""));
        }
    }
    
    /**
     * Tests if the given {@link Node} is valid, meaning not {@code null}, has a
     * {@link Node#getName() name} and was {@link Node#getCount() invoked at
     * all}.
     *
     * @param pNode the {@link Node} to test.
     * @return {@code true} if the given {@link Node} is considered valid.
     */
    private boolean isValid(Node pNode)
    {
        return pNode != null && pNode.getName() != null && pNode.getCount() > 0;
    }
    
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Native methods
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    /**
     * Logs the given message to the debug console.
     *
     * @param pMessage the message to log.
     */
    private native void debug(String pMessage)
    /*-{
        console.debug(pMessage);
    }-*/;
    
}    // DebugConsoleProfilerResultConsumer

And we can attach it rather easily, too:

Profiler.setProfilerResultConsumer(new DebugConsoleProfilerResultConsumer());

Make sure to do this only once, otherwise an Exception will be thrown stating that it can only be done once. Also, the application should not be in debug mode, otherwise the Vaadin consumer will be attached. Now that we've attached a consumer, we can recompile the widgetset and actually try it, and lo and behold, we see output in the debug window of the browser. Quite a lot, actually; it seems the Profiler is used often, which is good.

Filter the results

Skimming through the results is tedious. Luckily, most browsers come with the possibility to filter the output, but that possibility is quite limited. If we are interested in multiple results, the easiest way is to filter them ourselves in the consumer, and obviously we can do that just as easily:

import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Set;

import com.vaadin.client.Profiler.Node;
import com.vaadin.client.Profiler.ProfilerResultConsumer;

/**
 * A simple {@link ProfilerResultConsumer} which outputs selected profiler data
 * to the debug console of the browser.
 *
 * @author Robert Zenz
 */
public class DebugConsoleProfilerResultConsumer implements ProfilerResultConsumer
{
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Class members
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    /** The names of nodes we want to output. */
    private Set<String> wantedNames = new HashSet<String>();
    
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Initialization
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    /**
     * Creates a new instance of {@link DebugConsoleProfilerResultConsumer}.
     *
     * @param pWantedNames the names of the profiler data which should be
     *            displayed.
     */
    public DebugConsoleProfilerResultConsumer(String... pWantedNames)
    {
        super();
        
        if (pWantedNames != null && pWantedNames.length > 0)
        {
            for (String wantedName : pWantedNames)
            {
                wantedNames.add(wantedName);
            }
        }
    }
    
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Interface implementation
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    /**
     * {@inheritDoc}
     */
    @Override
    public void addProfilerData(Node pRootNode, List<Node> pTotals)
    {
        debug(pRootNode);
        
        for (Node node : pTotals)
        {
            debug(node);
        }
    }
    
    /**
     * {@inheritDoc}
     */
    @Override
    public void addBootstrapData(LinkedHashMap<String, Double> pTimings)
    {
        // TODO We'll ignore this for now.
    }
    
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // User-defined methods
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    /**
     * Sends the given {@link Node} to the debug console if it is
     * {@link #isValid(Node) valid}.
     *
     * @param pNode the {@link Node} to log.
     * @see #isValid(Node)
     */
    private void debug(Node pNode)
    {
        if (isValid(pNode))
        {
            debug(pNode.getStringRepresentation(""));
        }
    }
    
    /**
     * Tests if the given {@link Node} is valid, meaning not {@code null}, has a
     * {@link Node#getName() name} and was {@link Node#getCount() invoked at
     * all}.
     *
     * @param pNode the {@link Node} to test.
     * @return {@code true} if the given {@link Node} is considered valid.
     */
    private boolean isValid(Node pNode)
    {
        return pNode != null
                && pNode.getName() != null
                && pNode.getCount() > 0
                && (wantedNames.isEmpty() || wantedNames.contains(pNode.getName()));
    }
    
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // Native methods
    //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    /**
     * Logs the given message to the debug console.
     *
     * @param pMessage the message to log.
     */
    private native void debug(String pMessage)
    /*-{
        console.debug(pMessage);
    }-*/;
    
}    // DebugConsoleProfilerResultConsumer

Now we can simply pass the list of "interesting" event names to the consumer and see only these.

JavaScript hacking

But there is more: instead of setting a consumer, we can attach ourselves to the JavaScript function and process each profiled section individually. So in the debug console we can simply run something like this:

window.__gwtStatsEvent = function(event)
{
    console.debug(event);
};

This will output every single profiler event to the console. Now, if we want to process these events, we must first understand that there is always a "begin" and an "end" event, which mark the beginning and the end of a measured section, respectively.

We can now listen for a certain event and simply output how long it took, like this:

window.__profiler = {};

window.__gwtStatsEvent = function(event)
{
    if (event.subSystem === "Layout pass")
    {
        if (event.type === "begin")
        {
            window.__profiler[event.subSystem] = event.millis;
        }
        else
        {
            console.log(
                    event.subSystem
                    + ": "
                    + (event.millis - window.__profiler[event.subSystem]).toFixed(0)
                    + "ms");
        }
    }
};

Or, to go over the top, we could create a generic listener which informs us about everything:

window.__wantedEvents = [
    "Layout pass",
    "Layout measure connectors",
    "layout PostLayoutListener"
];

window.__profiler = {};

window.__gwtStatsEvent = function(event)
{
    if (window.__wantedEvents.indexOf(event.subSystem) >= 0)
    {
        if (typeof window.__profiler[event.subSystem] === "undefined")
        {
            window.__profiler[event.subSystem] = {
                averageRuntime : 0,
                count : 0,
                lastBegin : 0,
                lastEnd : 0,
                lastRuntime: 0,
                lastRuntimes : new Array(),
                minRuntime : 999999999,
                maxRuntime : -999999999,
                totalRuntime : 0
            }
        }
        
        var info = window.__profiler[event.subSystem];
        
        if (event.type === "begin")
        {
            info.count = info.count + 1;
            info.lastBegin = event.millis;
        }
        else
        {
            info.lastEnd = event.millis;
            
            info.lastRuntime = info.lastEnd - info.lastBegin;
            info.lastRuntimes.push(info.lastRuntime);
            info.minRuntime = Math.min(info.lastRuntime, info.minRuntime);
            info.maxRuntime = Math.max(info.lastRuntime, info.maxRuntime);
            info.totalRuntime = info.totalRuntime + info.lastRuntime;
            
            info.averageRuntime = 0;
            for (var index = 0; index < info.lastRuntimes.length; index++)
            {
                info.averageRuntime = info.averageRuntime + info.lastRuntimes[index];
            }
            info.averageRuntime = info.averageRuntime / info.lastRuntimes.length;
            
            console.log(
                    event.subSystem
                    + ": "
                    + info.count.toFixed(0)
                    + " times at "
                    + info.averageRuntime.toFixed(3)
                    + " totaling "
                    + info.totalRuntime.toFixed(0)
                    + "ms with current at "
                    + info.lastRuntime.toFixed(0)
                    + "ms ("
                    + info.minRuntime.toFixed(0)
                    + "ms/"
                    + info.maxRuntime.toFixed(0)
                    + "ms)");
        }
    }
};

And we now have a neat little system in place which displays everything we'd like to know, and it is easily extendable and modifiable, too.

Conclusion

As we see, we have quite a few ways to get information out of the profiler even without the application running in debug mode, though some might not be as obvious as others. The interesting part is that many things are easily accessible on the JavaScript side of Vaadin, directly from the debug console of the browser; one only has to look for them.

VisionX, a short look at Validators

It is time to have a short look at Validators, what they are, how they work and how they can be used.

Okay, what are they?

A Validator is a component which is available to our customers who have purchased VisionX; it allows you to quickly and easily add field validation to a form or any screen with records.

Validators in VisionX

Validators are readily available in VisionX as components which can be added to the screen.

VisionX Validators - Toolbox

It can be added to the screen by simply dragging it like any other component, but it must be configured afterwards to know which field requires validation.

VisionX Validators - Properties

There are three important properties to the Validator:

  • Binding: The field to which the Validator should be bound. This works analogously to selecting a field for an Editor.
  • Automatic validate: Whether the validation process should be performed automatically on value changes.
    If this is checked, the Validator will listen for value changes on the specified field and will automatically run the validation action on every change. If not checked, the validation process must be run manually by calling Validator.validate() as needed.
  • Hide until first validate: Whether the Validator should stay hidden until at least one validation has been performed.
    If this is checked, the Validator will not be visible until its validation has been run at least once; afterwards it will always be visible.

Validating values

To actually validate something, we have to attach an action to the Validator which will perform the validation. This can be readily done through the VisionX action designer, which provides everything needed to create such an action, so we will not go into detail on how to do this.

We will, however, have a short look at the code of a simple action.

public void doValidateNonEmpty(Validator pValidator) throws Throwable
{
    if (Logical.equals(rdbData.getValue("COLUMN"), ""))
    {
        throw new Exception("A value for the COLUMN must be entered.");
    }
}

It is a very simple action: the current value of the DataBook is checked, and if it is empty, an Exception is thrown. This is the simplest validation action one can create.

Validators in action

Once we have everything set up, we can put the Validators to good use. When the validation is performed, be it automatically or manually, the validation actions will be invoked, and if all of them return without throwing an Exception, the Validator will display a green check mark. However, if an action throws an Exception, the Validator will display a red "X".

VisionX Validators - Failed

Manual validation

Manually invoking the validation process as needed is quite simple: call Validator.isValid(), which will return either true or false.

public void doSaveButtonPressed(UIActionEvent pEvent) throws Throwable
{
    if (validator.isValid())
    {
        rdbData.saveSelectedRow();
    }
    else
    {
        labelError.setVisible(true);
    }
}

Above you see a sample action which manually performs the validation process and either saves the data or sets an error label to visible.

The ValidationResult

One can quickly end up with many Validators in a single screen, which might make it difficult for the user to see at a glance why a field does not validate. So it makes sense to have a short summary close to the save button to ensure that the user is readily provided with the information why the action could not be performed. For this scenario there is the ValidationResult, which is another component that can be added to the screen from the toolbox.

It will automatically find all Validators in the screen and will perform their validation as needed. Afterwards it will gather all error messages and display them in a list.

VisionX Validators - ValidationResult

The ValidationResult can be used similar to the Validator in an action.

public void doSaveButtonPressed(UIActionEvent pEvent) throws Throwable
{
    if (validationResult.isValid())
    {
        rdbData.saveSelectedRow();
    }
    else
    {
        labelError.setVisible(true);
    }
}

Additionally, there is the clearMessage() method, which clears the list of errors.

Conclusion

The Validator and ValidationResult provide quick and easy means to add data validation to forms and screens with records, and they can be added and configured completely through the VisionX designer. Additionally, they provide a rich API which makes them easy to use when writing code manually or when extending them with additional functionality.

JVx Kitchensink, a simple JVx showcase application

As we've just noticed, we've neglected so far to properly introduce the JVx Kitchensink, a (very) simple showcase application for JVx and its controls. We'd like to rectify that now.

A simple showcase, not a demo application

The JVx Kitchensink can be found on GitHub and is available under the Apache 2.0 License. It demonstrates nearly all controls and components of the JVx framework with simple and easy-to-digest examples.

JVx Kitchensink

As some of you might know, there is also the JVx Hello World Application. The Kitchensink does not intend to supersede the Hello World Application; quite the opposite, the intention is to complement it. The Hello World Application demonstrates how to quickly and easily set up a complete JVx application and have it up and running in mere minutes, with a focus on the lifecycle of the application. The Kitchensink, on the other hand, demonstrates each individual component and control of JVx, and completely neglects the "normal" lifecycle of a JVx application.

Samples

The button bar on the left allows you to quickly access each example, for example the one about databinding.

JVx Kitchensink - Databinding

A newly added feature is that you can now see the source code of the selected sample right inside the Kitchensink; simply select the "Source" tab.

JVx Kitchensink - Databinding (Source)

Here I have to mention the awesome RSyntaxTextArea, a Swing component which provides syntax highlighting and editing functionality for source code, and which we obviously use.

Regarding the lifecycle

As said before, the Kitchensink is not a good example of the lifecycle of a JVx application, as outlined in JVx Reference, Application Basics.

Packaged and ready

The Kitchensink already comes as a ready-to-use Eclipse project with launchers, so all you have to do is import it into Eclipse and you're ready to go. There is also an Ant build script and instructions on how to launch the precompiled jars.

Last but not least, it provides a nice test bed for most of the functionality of JVx and demonstrates most concepts in a neatly packaged manner. We've been using it extensively during the development of the JVx JavaFX frontend bindings, and it can just as easily be used to test new concepts and custom components.

Once again, the link to the KitchenSink GitHub repository

VisionX Demo Application

With the preparations for the VisionX 2.4 Update 1 we've also created several simple but telling demo applications. Let's have a look at these.

Demo-AMLD

We start our short tour with the first demo, the "Aircraft Manufacturing Line Dashboard", which emulates a dashboard as it can be found in various factories and manufacturing companies.

DemoAMLD

Its main frontend is a Swing-based GUI which displays (randomly generated) data; additionally, there is a Vaadin-powered frontend which displays the raw data. The main highlights of this demo are:

  • Capability to customize technology-specific components.
  • Utilizing custom components in the VisionX designer.
  • Usage of cell formatters.
  • Making WorkScreens only available in certain environments.
  • Data is generated with either the Random object or OpenSimplexNoise.

Demo-BIELS

This is a demo of a typical dashboard and the master data forms of an ERP application, in this case for a small company focused on the import and export of chemical/biochemical materials and similar goods.

DemoBIELS

All WorkScreens are fully functional in Swing and Vaadin environments. The main highlights of this demo are:

  • Demonstration of feature parity and interchangeability between Swing and Vaadin.
  • Creation of custom controls for displaying data.
  • Fully and easily editable with the VisionX designer.

Demo-CNASP

Next up is a demonstration of the ability to create end-user facing web portals with JVx and VisionX.

DemoCNASP

The whole application is designed to function on large and small displays alike, such as those of smartphones. The main highlights are:

  • Demonstration of extensive customization via CSS.
  • Differing layouts depending on the environment (Desktop, Tablet, Smartphone).

Demo-DMARS

Accessing data and documents on the go is always a bit of a hassle, but we can make it easier and more comfortable by providing easy-to-use GUIs for such occasions.

DemoDMARS

This simple document management system shows off a simple and easy-to-use GUI for accessing and manipulating documents in the storage. The highlights are:

  • GUI designed to be used on small, touch-enabled displays.
  • Extending the Table to allow checkbox based multi-row selection.
  • Inserting and retrieving files from a database.
  • The new Upload component available in ProjX.

Demo-Reports

Last, but not least, is the "big guide" on how to use reporting in VisionX.

DemoReports

This application takes you step by step through the capabilities and features of the VisionX reporting solution. Every WorkScreen demonstrates another feature, complete with a description, an example, access to the template and the final report, and of course the related documentation.

  • A step-by-step introduction to the VisionX reporting solution.
  • 12 examples with easy to read and documented source code.
  • All parts easily accessible: Source code, Template, Data and Report.

Availability

All these applications are available in the VisionX Solution Store, on demand. The source code is licensed under the permissive Apache License.