JVxEE 1.2


JVxEE version 1.2 is out!

The good news

JVxEE is now available from Maven Central, which means that you can add it as a dependency to your Maven projects:

<dependency>
    <groupId>com.sibvisions.jvx</groupId>
    <artifactId>jvxee</artifactId>
    <version>1.2</version>
</dependency>

The first of the two major changes is that we fixed possible exceptions thrown by JPAStorage.getEstimatedRowCount(ICondition); it should now work in all situations.

The second change is the handling of foreign key columns. Previously, foreign key columns were named with the pattern "REFERENCEDTABLE_REFERENCEDCOLUMN", which can lead to collisions if more than one column references the same table and primary key. So it was possible to end up with two columns with the same name, which of course can't be handled correctly by the storage and databook. We devised a new naming scheme, and from now on foreign key columns are named as a combination of the referencing column and the referenced column.

An example:

@Entity public class A
{
    @Id private int id;
    private B source;
    private B target;
   
    // Getters/Setters
}

@Entity public class B
{
    @Id private int id;
    private String name;

    // Getters/Setters
}

With 1.1 the generated columns would look like this, for entity "A":

ID          BigDecimal
B_ID        BigDecimal
B_NAME      String
B_ID        BigDecimal
B_NAME      String

And with 1.2:

ID          BigDecimal
SOURCE_ID   BigDecimal
SOURCE_NAME String
TARGET_ID   BigDecimal
TARGET_NAME String

This is definitely an improvement!

The bad news

There is always a downside :(

The change in the foreign key column naming scheme, made to avoid collisions, also means that most foreign key columns now have a different name. You'll have to check your code for usages of the renamed columns.
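
To illustrate the kind of change to look for, here's a hedged sketch (assuming the entities A and B from above and a databook bound to the storage for entity A; getValueAsString is the usual JVx way to read a column value):

// JVxEE 1.1: the column was named after the referenced entity.
String name = dataBook.getValueAsString("B_NAME");

// JVxEE 1.2: the referencing attribute is part of the column name.
String sourceName = dataBook.getValueAsString("SOURCE_NAME");
String targetName = dataBook.getValueAsString("TARGET_NAME");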

But there is also an upside! With EPlug you will find those easily.

Usage example

JVxEE provides the possibility to utilize the Java Persistence API (JPA) as backend for storages and databooks. JPA is powered by POJOs, like these:

@Entity public class Aircraft
{
    private String country;
    private String description;
    @Id private String registrationNumber;
}

@Entity public class Airport
{
    @Id private String code;
    private String country;
    private String location;
    private String name;
}

@Entity public class Flight
{
    @OneToOne private Aircraft aircraft;
    private String airline;
    @OneToOne private Airport airportDestination;
    @OneToOne private Airport airportOrigin;
    @Id private String flightNumber;
}

This is an extremely simplified model for airline flights.

There is an aircraft that can be used, airports that can be flown to and from, and the flight itself. Flight references both the aircraft and the airports. Now we only need to tell JPA about these classes by placing a persistence.xml in the META-INF directory, like this one that we use for our unit tests:

<?xml version="1.0" encoding="UTF-8" ?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                                 http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
             version="2.0">

  <persistence-unit name="test" transaction-type="RESOURCE_LOCAL">
    <class>com.sibvisions.rad.persist.jpa.entity.flight.Aircraft</class>
    <class>com.sibvisions.rad.persist.jpa.entity.flight.Airport</class>
    <class>com.sibvisions.rad.persist.jpa.entity.flight.Flight</class>

    <properties>
      <property name="javax.persistence.jdbc.driver" value="org.hsqldb.jdbcDriver" />
      <property name="javax.persistence.jdbc.url" value="jdbc:hsqldb:hsql://localhost/db" />
      <property name="javax.persistence.jdbc.user" value="sa" />
      <property name="javax.persistence.jdbc.password" value="" />

      <property name="eclipselink.ddl-generation" value="drop-and-create-tables" />
      <property name="eclipselink.ddl-generation.output-mode" value="database" />
      <property name="eclipselink.logging.level" value="FINE"/>
    </properties>
  </persistence-unit>
</persistence>

(Sure, it's also possible without manual XML mapping)

Now all that is left is creating a new storage that uses JPA:

EntityManagerFactory entityManagerFactory = Persistence.createEntityManagerFactory("test");
EntityManager entityManager = entityManagerFactory.createEntityManager();

JPAStorage storage = new JPAStorage(Flight.class);
storage.setEntityManager(entityManager);
storage.open();

And that's it! From here on there is only JVx code.
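
To sketch what that JVx code might look like (a hedged example; the storage name "flights" and the server-side registration are assumptions, not part of the snippet above): if the storage is created in a server-side lifecycle object and registered under the name "flights", a client-side RemoteDataBook binds to it by that name:

// Client side: bind a RemoteDataBook to the server-side JPAStorage by name.
RemoteDataBook rdbFlights = new RemoteDataBook();
rdbFlights.setDataSource(getDataSource());
rdbFlights.setName("flights");
rdbFlights.open();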

EPlug 1.2.0


Great news for everyone who uses, or plans to use, our Eclipse plugin EPlug: a new version is available! Version 1.2.0 brings a whole truckload of new features and bug fixes.

Changes (Pro and VisionX)

  • Completion for resources is now more reliable
  • Sped up the time it takes for VisionX to reload a file if a change was made in Eclipse
  • MetaData changes are now properly handled
  • When selecting from VisionX to Eclipse, it is now ensured that both the server and the client class show up
  • Added the "Check File (EPlug)" command, which allows you to easily re-run all checks on the current file
  • The hyperlinks for server calls/actions are now "fuzzy", which means that if there is no function with that signature, the next best function will be the target
  • Improved the error messages if there was a problem when checking databooks
  • Added completion for generic resources, which means you can now get completion for any resource in your project
  • Calls to UIImage.getImage(String) are now checked at compile time to verify that the referenced resource exists
  • By default, databooks are now checked while you're typing for instant feedback. This can be disabled in the project properties
  • Added a view that displays all databooks in the current file, including column name and type
  • Added support for the constructor of UIEditor
  • Dramatically increased the speed of compile time checks

Eclipse Mars

We have also tested and developed EPlug with Eclipse Mars, which means that you can use it with the latest and greatest Eclipse release. Of course we haven't dropped support for older versions; EPlug should continue to work even on Eclipse Juno.

Download

You can install EPlug directly from the Eclipse Marketplace.

EPlug - The Big Guided Tour


Since last year we have been offering an Eclipse plugin that integrates the JVx workflow into Eclipse. Now that EPlug 1.2 has been released, we believe it's long overdue to give you a guided tour of the experience EPlug offers.

Trial

We offer a free trial period so that you can test EPlug without any hassle. The first time you start Eclipse with EPlug, you will be asked whether you'd like to just test EPlug or select an already purchased Pro license.

Showing the EPlug license welcome screen.

When clicking the Trial button, a new trial license will be issued and you'll be able to evaluate EPlug with all features for 30 days.

Showing the trial window.

'First Run' Wizard

One of the biggest usability improvements compared to previous versions is the discoverability of how to use and activate EPlug. With the new version we have added a "First Run Wizard" which allows you to directly activate EPlug on selected projects.

First Run Wizard

Usage

EPlug integrates seamlessly into Eclipse, but to use most of its features it has to be activated on each project you want to use it with. This can be done in the previously shown First Run Wizard, or by right-clicking a project and selecting "Activate EPlug" under "Configure".

Showing how to activate EPlug.

From there on you will be able to use code completion, compile time checks and all the other features on this project.

Better and extended commands

EPlug adds a handful of commands that make it faster to navigate JVx projects.

Go to complement class

It's often the case that you want to go from the server to the client class, or from the client to the server class. Most of the time in Eclipse this involves expanding trees and looking for the correct class in the Package Explorer. That is why we've implemented the "Go to complement class" command, which enables you to quickly jump from the server to the client class, or the other way round.

The context menu of Eclipse showing the "Go to complement class".

As you can see the command is available from the context menu, but you can also bind it to a key in the "Keys" preferences.

Showing the motion of the command.

Open Declaration

The "Open Declaration" command, sometimes known by its key binding "F3", allows you to jump to the declaration of whatever is currently under the cursor. We've extended this command with the possibility to jump to the declaration of event handlers (actions), server methods and storage's. If the item underneath the cursor is not handled by our extension, the default "Open Declaration" command will be invoked.

The context menu of Eclipse showing the "Open Declaration (EPlug)" command.

The declaration:

Showing the target of the command.

Because we've extended the "Open Declaration" command, you can also bind the "Open Declaration (EPlug)" command to "F3" and enjoy faster navigation in your JVx files without any downsides.

DataBooks

The biggest and most important feature of EPlug is its support for data books. EPlug offers code completion, compile time checks and more for column names on remote and local data books.

Code completion

Every time you want to get or set a value, or wire up an editor, you will receive code completion suggestions with all column names in the databook, no matter whether they are remote or added locally.

Showing the code completion of columns.

Compile time checks

Of course we also added compile time checks, which means that you can never mistype a column name ever again.

Showing the compile time checks of column names.

With the newest version, these compile time checks are even active while you type!

Text hovers

When hovering above a column name, a simple text hover will inform you about the type of the column.

Showing the text hovers of columns.

Storage support

RemoteDataBooks need the correct name set so that the server-side storage can be found; of course we also offer code completion, compile time checks, text hovers and hyperlinks for this.

Showing the code completion for storages.

Showing the compile time checks of storages.

The DataBook View

And last but certainly not least, with the newest version a feature has been added that I've been looking forward to for quite some time: the DataBook View. A view similar to the Outline view that displays all databooks in the current file along with all of their columns.

Showing the DataBookView.

Actions/Events and server calls

Support for actions, events and server calls is the second big EPlug feature. For events and actions we support a very Lambda-like system that uses reflection and strings. Obviously the compiler was never able to understand this system and provide feedback or support for it, but with EPlug this has changed.

Code Completion

Whenever you want to wire up an event, you'll now receive code completion for all matching methods in the used class.

Action methods:

Showing the code completion for events.

Remote calls:

Showing the code completion for server calls.

As you can see in the upper image, we also provide a fast and convenient way to create methods if necessary.

Compile time checks

During compilation the actions, events and server calls are checked for their correctness, and if there is a problem it'll be reported to you.

Showing the compile time checks of actions.

You can also see the quick fixes for this problem, which not only offer to create the missing method, but also suggest methods with similar names in case of a typo.

Text hovers, hyperlinks and refactoring support

Additionally, EPlug provides text hover and hyperlink support, which means that you can jump to the methods by using your mouse (Ctrl+Left Click) or the "Open Declaration (EPlug)" command. Refactoring support has also been added, so you can rename action/event handlers without having to manually search for all usages and change them.

Resources

Dealing with resources can often be a pain in the neck, especially if you constantly have to look up the path and check if you're now using the correct image. Because we also felt these pains, we've added functionality to EPlug to make sure that working with resources becomes easy and painless.

Support for UIImage

Whenever you use UIImage methods, you can now enjoy code completion, previews of the images and compile time checks.

Showing the code completion for UIImage.

Preview:

Showing the preview of images in UIImage.

Compile checks:

Compile time checks of resources.

Generic resource completion

For all other resources, we only offer a "generic" code completion system and no preview. Still, this is a huge help.

Showing the code completion for other resources.

Comments

One of the simpler and less obvious features of EPlug is that it provides code completion for the current class inside comments, and also offers the JVx category separators.

Showing the code completion for comments.

Separator:

Showing the code completion for comments.

Action/call completion in comments:

Showing the code completion for comments.

VisionX support

Now we've arrived at the big finale: support for VisionX. For all of you who do not know VisionX, it is our product for rapidly building applications from scratch or migrating existing systems. It allows you to build GUIs and the respective database backend within a matter of minutes. Even though VisionX allows you to build whole applications, from time to time you'll want to do something by hand, and this is the great thing about VisionX: all projects and applications are automatically, and by design, already Eclipse projects. So all you need to do is import the project into Eclipse and start working on it. To improve this workflow further, EPlug offers various features.

Selection synchronization

The selection in Eclipse and VisionX can be automatically synchronized, so that whatever you're working on in one application is also visible and selected in the other.

Showing the selection between VisionX and Eclipse.

Automatic applying of changes

VisionX will also automatically refresh its current view if you change a source file in Eclipse, allowing you to rapidly make and verify changes in a workscreen without having to manually reload it.

How and where to get it?

EPlug comes in two flavors, EPlug for JVx and EPlug for VisionX, both are available from the Eclipse Marketplace and can be installed and used freely for 30 days. Afterwards a license is necessary to keep using all features.

One year SIB Visions, a look back and ahead


It's now been a year since I joined SIB Visions; I guess that's a good time to take a short break and look back at everything that has happened so far.

Know the framework

The first day at SIB Visions was filled with getting familiar with the JVx application framework, including a crash course through its complete architecture. I have to be honest here and say that I maybe understood half of what I was told, but it was an awesome and extremely useful overview of the framework.

Right after that I made myself roughly familiar with the framework by looking through the code, and I got assigned my first tickets: simple feature requests and bugs, easy enough to do. Over the next weeks I read through quite a few areas of the codebase and learned to navigate it. Nothing too exciting.

FMB Importer - Oracle Forms to JVx

The first really big task I was assigned was writing an FMB importer from scratch. In case you don't know what FMB is, it is the file format used by Oracle Forms Builder to save the created forms. It basically comes in two flavors, binary and XML. The goal was to be able to import a form created with Oracle Forms Builder directly into VisionX, preserving as much information as possible and making the migration of the GUI as seamless as possible.

Reading the XML file is easy; processing its content not so much. The concepts of the JVx UI and the Oracle Forms UI are quite different. In JVx the controls form a tree structure/hierarchy, with that hierarchy dictating the size and location of a control. Oracle Forms uses a coordinate approach, with each control knowing its exact coordinates and size on the form. This can be clearly seen in the following picture:

JVx/Swing and Forms hierarchy comparison

On the left is the JVx control tree (Swing), on the right the Summit example in Oracle Forms Builder; the XML format looks much the same as the view in the designer. After some trial and error, I decided to take a two-step approach to converting this structure:

  1. Convert the XML nodes into custom objects and place them in a tree structure.
  2. Convert the custom objects to JVx classes and perform the layouting in this step.

Step 1 starts rather easy: I encapsulated the XML nodes in a custom class to have a thin interface over them, which allowed me to provide support methods and yet still have all the information of the XML node easily accessible. The tricky part is the creation of the tree structure and the conversion to the custom objects, because the controls of the form do not have any sort of structure and are split into two groups, graphical elements (lines, circles, etc.) and controls (text fields, radio buttons, etc.). The best working solution I found was to create a processor object that knows everything necessary to create the custom objects and also builds the tree structure. The processor starts with the window and finds all canvases that can be on that window, then it finds all controls and graphics that can be on these canvases. From there it recurses through all controls and graphical elements, always finding the children of the current one by checking their coordinates and size. Extremely simplified, it looks like this:

function Container createContainer(Object parent)
    current = new Container

    foreach child on canvas
        if child is inside parent then
            if child is container then
                current.add(createContainer(child))
            else
                current.add(child)

    return current

"Extremely simplified" might be an understatement, because there are a lot of edge cases that need to be taken into account and you can't treat all elements the same, but basically that's the core principle of the processor. If you also go with naming your functions after what they are doing, don't be surprised if you end up with a function named transformItemNodesOnCanvasOnTabPageInBoundary. In the same step as the creation of the tree is the creation of a custom object hierarchy. I decided against converting them directly into JVx classes, because that would add a lot of logic to the processor and would mix two tasks, creating the tree structure and converting the objects.

Now that we have a nice tree of objects, converting them to JVx classes is easy. Ideally, all custom objects created in the previous step inherit from one base, and that base has a method like toJVxComponent(). With that in place, converting the custom objects becomes only a matter of calling root.toJVxComponent() after the tree has been generated. Again, the second part of this step is what gets you: the layouting. A lot of trial and error went into this one, and while Oracle Forms only supports one layout, coordinates, JVx has multiple:

  • Pane
  • SplitPane
  • BorderPane
  • FormPane

So we need some form of heuristic approach to determine what layout we will be using for each container based on:

  • The number of children.
  • The position and size of these children.

Again, very simplified the approach might look like this:

function Layout determineLayout(Container container)
    if container.#children == 0 then
        return null
    else if container.#children == 1 then
        return container.firstChildren
    else if container.#children == 2 then
        return SplitPane
    else
        return FormPane

Again, this misses quite a few edge cases and optimizations. But the really hard part is still missing: how do we convert a coordinate-based layout to a FormPane layout?

I tried multiple methods but ended up using a sweep line algorithm. The basic idea is to have a line sweep from left to right and top to bottom, creating anchors on the go as necessary. We'll sweep from left to right, for example, and create a new anchor every time we encounter the start of a new element while another element has already ended. Assume the following layout:

Layout

We'll sweep from left to right, create anchors and stretch all components to their respective anchors:

Second step, create anchors.

Second step, stretch

Even though the layout already looks quite nice, there are two problems here. First, the second/third and fourth/fifth anchors are empty; they do not contain a component ("contain" as in a component that starts and ends in that anchor). So we can eliminate all anchors that do not contain a component.

Third step, eliminate empty anchors.

Third step, stretch

The second problem is the overlapping components in the middle. We can easily fix that by pushing the components around.

Fourth step, overlapping elimination

All that is missing now is a second pass from top to bottom. With that, we have everything in place to import the UI. Of course there are still some parts missing, like data binding and wiring up events. But that wraps up the approach to importing FMB files.

EPlug - JVx Eclipse Plugin

I enjoyed working on the plugin very much. The Eclipse environment was a completely new system to work with and I had never before written a plugin for a development environment. After the basic functionality was determined, I set to work and made myself familiar with Eclipse. As it turns out, the Eclipse plugin system is quite huge, but once you know the basics it is very easy to use. Every plugin consists of a simple "core" of files:

  • plugin.xml
  • Manifest.mf
  • Activator

The plugin.xml contains all the information the Eclipse runtime needs to load and run the plugin. It is also where you register all the extensions you want to use or provide. The Manifest.mf contains some additional information, and the Activator is the main lifecycle class of the plugin. Registering an extension is as easy as implementing the interface required by the extension point and then adding your implementation to the extension point's list; your class is then instantiated and used automatically by whatever the extension point belongs to.

The first big obstacle was implementing auto completion for various code parts, for example our listeners. As it turned out, JDT (Java Development Tools) only has limited support for it. If you use their parser you can only work with complete lines; it was not possible to get incomplete statements. So we had to rely on internal classes provided by the "JavaCompletionProposalComputer" and the "JavaContentAssistInvocationContext", which also allowed us to get the information needed for incomplete statements.

The other big part of the plugin is launching an application in a container that allows us to extract information about it (like lifecycle objects, server metadata, etc.). That part is quite big and complicated, so let me shortly outline the approach:

  • Create a new Server.
  • Gather all necessary dependencies for the application.
  • Create a ClassLoader that contains all these dependencies and inject it into the ObjectProvider of the server.
  • Create yourself a new session.

Obviously this is only a rough outline; for example, there's the security manager. We took the approach of changing the configuration of the application on the fly and injecting our own security manager that way, which allowed us to do everything within the application. Also, our session is a custom implementation that can never expire.

From there it is only a small step to receiving metadata and other information needed to provide auto completion and compile time errors. Speaking of compile time errors, implementing those was a piece of cake. The only two things needed are to register a build watcher, which is notified of every file touched by the build process, and to run the file you want through the parser to receive the AST. Walking the AST is, thanks to the visitor pattern, very easy, and you can quickly select only those parts of the AST that you're interested in.
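
As a rough illustration of that pattern (a minimal sketch, not the actual EPlug code; it assumes the JDT classes from org.eclipse.jdt.core.dom and a String variable "sourceCode" holding the file content, and uses setName(...) merely as an example of a call worth inspecting):

// Parse a compilation unit with the JDT parser.
ASTParser parser = ASTParser.newParser(AST.JLS8);
parser.setKind(ASTParser.K_COMPILATION_UNIT);
parser.setSource(sourceCode.toCharArray());

final CompilationUnit unit = (CompilationUnit)parser.createAST(null);

// Visit all method invocations and pick out the ones we care about.
unit.accept(new ASTVisitor()
{
    @Override
    public boolean visit(MethodInvocation node)
    {
        if ("setName".equals(node.getName().getIdentifier()))
        {
            System.out.println("setName call at line "
                    + unit.getLineNumber(node.getStartPosition()));
        }
        return true;
    }
});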

One of the biggest features we can offer is auto completion and compile time support for DataBook columns. From a technical point of view, the support for RemoteDataBooks is kind of boring: we simply query the server for the metadata. The support for MemDataBooks, however, was a challenge. Because of the very nature of MemDataBooks, there were only two options to obtain the needed MetaData:

  1. Run the code and extract the needed information.
  2. Parse the code/AST.

The first possibility got axed extremely quickly, as it would be too hard, if not impossible, to do in a sane manner. So we ended up parsing the code/AST directly for any information about the MemDataBook we were trying to provide support for. This is tedious work, as you basically have to consider all the permutations of creating a MemDataBook and everything related to it, for example the following code:

IDataBook dataBook = new MemDataBook();
dataBook.getRowDefinition().addColumnDefinition(new ColumnDefinition("COLUMN"));
dataBook.setName("NAME");
dataBook.open();

If we now need to get the metadata for this MemDataBook, these are roughly the steps:

  • Find the variable that contains the MemDataBook.
  • Find the declaration or last assignment of that variable.
  • Find all usages of that variable.
  • Get all values from the .getRowDefinition().addColumnDefinition calls.

This is quite some work, and it does not take into account that the row definition might be put into a variable, or might be provided by a different variable altogether. In the end it comes down to implementing all the edge cases you run across, and I think we did that very well.

We also had to create a few GUI forms with SWT, and I have to say that it is very good to work with.

I'm looking forward to returning to this project for the 1.2 release, implementing even more awesome features and doing some refactoring on the components I didn't quite get right the first time.

GUI Tests - Stress testing Vaadin and automating JVx applications

I was looking forward to this research project. I was always fascinated by the idea of GUI unit tests, and now I had the chance to actually do some research on them and implement everything necessary in JVx to make them possible.

You can read about the results of the Vaadin stress tests on a previous blog post.

After trying different possibilities, including the awesome Sikuli, we settled on AssertJ-Swing for Swing and Selenium for Vaadin. The first rough tests showed that testing was only reasonably possible if we named all components by default, so we derived a default naming scheme and implemented it. While experimenting with AssertJ-Swing I realized that one important thing was missing from it: the possibility to easily record tests. Selenium already offers a simple recorder for most browsers, which simply records all events and exports them in the correct format; unfortunately, AssertJ-Swing has no such facility.

Swing itself is built on top of AWT, which allows registering a global event listener via the Toolkit. That means one can see all events that are happening, which makes it rather easy to record these events and put them into an easy-to-use format. The format in our case is Java code that can be copied and pasted into our test cases. To have a reasonable layer of abstraction, the recorder filters and prepares the events and forwards them to a "plugin" that is able to emit Java code. With this facility in place it was easy to record tests for Swing; replaying them, however, was not so easy at first. AssertJ-Swing has very concrete requirements when it comes to how it is launched, and JVx has its own ideas regarding that. The solution was to derive a test harness which manages both the AssertJ-Swing setup and the start of the JVx application. The unit tests then simply extend said test harness and are able to launch the application on their own, execute the test case and then shut it down again. This worked extremely well, and we were even able to record and replay test cases for VisionX with ease.
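
The global listener part, for example, boils down to something like this (a minimal sketch of the idea, not our actual recorder; the AssertJ-Swing call in the comment is only indicative):

// Register a global AWT event listener via the Toolkit and react to clicks.
Toolkit.getDefaultToolkit().addAWTEventListener(new AWTEventListener()
{
    @Override
    public void eventDispatched(AWTEvent event)
    {
        if (event.getID() == MouseEvent.MOUSE_CLICKED)
        {
            Component component = ((MouseEvent)event).getComponent();

            // A recorder "plugin" would emit Java code from this, roughly:
            // frame.button(component.getName()).click();
            System.out.println("Clicked: " + component.getName());
        }
    }
}, AWTEvent.MOUSE_EVENT_MASK);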

These experiments have shown that it is very easy to perform automated GUI tests on JVx applications, no matter whether Swing or Vaadin is used as the frontend. The code created during these experiments is not public at the moment, as there is still quite a lot to do and it's in dire need of some refactoring. But I'm very confident that we will be able to return to it and one day provide you with the tools to easily create automated GUI tests for JVx.

JavaFX - Thee shall be my nemesis

JavaFX has been in the works for quite some time now and has been coming along nicely. It was overdue for JVx to support JavaFX as a frontend, and at the beginning of this year the work to make that happen started.

Even though JavaFX has come a long way, it is still quite rough around the edges and contains bugs that we needed to work around, as well as some missing features. Currently we are monitoring over 40 requests in the JavaFX bug tracker that are either directly or indirectly related to our JVx implementation; a little less than half of these were reported by us. They range from small bugs to quite big feature requests, some more critical than others.

One of the biggest missing features was an MDI system, which we simply need for ERP applications. There is RT-22856, which is concerned with creating such a system, but it hasn't moved in quite some time, so we set out to create our own. Our efforts are quite well covered by the blog post linked above; we managed to come up with a very nice implementation:

  • The FXInternalWindow can either be used with or without a desktop pane. They might as well live in a normal Pane (or any other).
  • The content of internal windows is not limited in any way (you could also nest another desktop pane into it).
  • The FXDesktopPane does offer various convenience features, like a pluggable window manager system.
  • Implementing a new and custom window manager is a breeze, there are already two implementations provided by us: The FXDesktopWindowManager and the FXTabWindowManager.

But the MDI system is not the only system or control we created to aid our implementation of JVx on top of JavaFX; there is a long list of other controls that we built.

But one thing that really drove me up the wall several times was the use of the "final" keyword and "package-private" modifiers. Multiple times a problem could have been easily solved by overriding a method, which turned out to be final, or by invoking a method or accessing a variable, which turned out to be (package) private. Good API design dictates that both are used to protect the API from misuse, but for our work on the JVx/JavaFX project they stood in the way more often than they helped. The takeaway for me personally is that you of course need to design an API in a way that limits the possibility for abuse, but you also have to allow users to shoot themselves in the foot from time to time, meaning they can experiment and use the API for possibilities it might not have been intended for.

The look ahead

And now we're in the present, phew, what a year! There are already two projects waiting for me as soon as we're done with the first version of our JVx/JavaFX implementation.

I'm looking forward to another year with the awesome people at SIB Visions!

JVx and Java 8, Events and Lambdas


With version 8, Java has received support for lambda expressions and of course also JVx and its users directly profit from this new feature.

In case you do not know what a lambda expression is: it is basically the possibility to inline functions, sometimes also called "anonymous functions", and should not be confused with "anonymous classes", which Java has supported for quite some time now. Let's look at a simple example of an anonymous class:

new Thread(new Runnable()
{
    public void run()
    {
        // your code
    }
}).start();

This launches a thread that does something. The Runnable in our example is the anonymous class. Now with the new lambda support we don't need to write the interface implementation, but can directly tell it to run a function:

new Thread(() -> { /* your code */ }).start();

The empty parentheses and the arrow indicate a new anonymous function. The parentheses contain the list of parameters of the function that is going to be invoked, in our case it is empty because Runnable.run() does not have any parameters. So the anatomy of a lambda looks like this:

new Thread((optional explicit cast to target interface)(parameters) ->
{
    function body
});

Internally, the compiler and runtime still create an implementation of the functional interface for us, but the code becomes a lot smoother and easier to read.

Additionally, it is now possible to reference methods directly, like this:

// instance
new Thread(this::worker).start();
// static
new Thread(YourWorkClass::worker).start();

Especially the last possibility is very exciting, as our event system has a very similar mechanism but until recently did not have compile-time safety and checks.

The new lambda feature is backwards compatible and can easily be used in JVx. Here is a simple test window that shows the new lambda expressions in combination with JVx events:

import javax.rad.genui.component.UIButton;
import javax.rad.genui.container.UIFrame;
import javax.rad.genui.layout.UIBorderLayout;
import javax.rad.ui.event.UIActionEvent;

public class LambdaShowcaseFrame extends UIFrame
{
    public LambdaShowcaseFrame()  
    {
        initializeUI();
    }

    public void doJVxAction(UIActionEvent pActionEvent)
    {
        System.out.println("JVx");
    }

    public void doLambda2(UIActionEvent pActionEvent)
    {
        System.out.println("Lambda 2");
    }

    private void initializeUI()
    {
        //before Java 8
        UIButton buttonJVx = new UIButton("JVx Style");
        buttonJVx.eventAction().addListener(this, "doJVxAction");

        // with Java 8

        UIButton buttonLambda1 = new UIButton("Lambda 1");
        buttonLambda1.eventAction().
                addListener(pActionEvent -> System.out.println("Lambda 1"));

        UIButton buttonLambda2 = new UIButton("Lambda 2");
        buttonLambda2.eventAction().addListener(this::doLambda2);

        setLayout(new UIBorderLayout());
        add(buttonJVx, UIBorderLayout.NORTH);
        add(buttonLambda1, UIBorderLayout.WEST);
        add(buttonLambda2, UIBorderLayout.EAST);
    }
}

Stress testing a Vaadin application with Selenium Grid, a summary


For quite some time we have had the idea of running a stress test against a JVx/Vaadin installation to see if there are any major problems with performance or multithreading. We knew that Tomcat, Vaadin and JVx could serve a lot of users simultaneously; that's what all three were designed to do, and that's what all three do well in many different environments, but we had never really tried to stress them. So we set out to change that.

The plan

Some weeks ago we started discussing and outlining how to perform such a stress test. After some research we came up with a plan:

  1. Design a test case using Selenium for the DemoERP application
  2. If necessary, design and create solutions to allow that
  3. Set up a Selenium Grid which can be used for testing
  4. Perform the actual test and fix anything that needs to be fixed

As simple as it sounds, there were quite a few gotchas along the way, and it taught us some valuable lessons.

Name all the things

The first thing you need for automated testing are names; components need names so that they are easily identifiable by whatever you're using for testing. Back in September we already started to name components in a unique and easily recognizable fashion; our efforts are covered by ticket #1103. Of course you can still set your own names if you want, as the default names are only applied if no other name is set.

In short, components now receive a name based on their position in the component tree. For example, take this simple workscreen:

Class                 Name

WorkScreen:           WS-XX
    SplitPanel:       WS-XX_SP1
        Panel:        WS-XX_SP1_P1
            Label:    WS-XX_SP1_P1_L1
            Editor:   WS-XX_E_COLUMN
        Panel:        WS-XX_SP1_P2
            Label:    WS-XX_SP1_P2_L1
            Button:   WS-XX_B_DOPRINT

The naming starts with the "root" component, and every child component appends its own name. So you can easily tell where a component belongs, simply from its name. Every name is guaranteed to be unique, either by its position in the tree or by an incrementing counter.

Additionally there are "special snowflakes" which get better names, such as a button, whose name is prefixed with the name of the workscreen and a "B" and then followed by the action assigned to it, e.g.

public class DashboardWorkScreen extends ...
{
    ...
    public void initializeUI()
    {
        ...
        UIButton butUp = new UIButton("UP");
        butUp.setSize(80, 80);
        butUp.setStyle(new Style("up"));
        butUp.eventAction().addListener(this, "doUp");
    }
}

The button "butUp" will get the name Das-WV_B_DOUP. The prefix Das is the short name for the DashboardWorkScreen, followed by -WV which is like a prefix checksum/number. The letter B stands for Button and the action name DOUP is the last part.

With such a naming scheme in place, it's easy to create automated UI tests, as every component is uniquely identifiable in the component tree. In the case of Vaadin, the names are also used as IDs on the elements:

<button type="button" class="v-nativebutton v-widget up v-nativebutton-up daswv_b_doup v-nativebutton-daswv_b_doup default-padding v-nativebutton-default-padding v-has-width v-has-height" id="Das-WV_B_DOUP" tabindex="0" style="width: 80px; height: 80px;">
  <span class="v-nativebutton-caption">UP</span>
</button>
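
In a Selenium test such a generated ID can be used directly to locate the component; a minimal sketch (the application URL is made up for the example):

// Open the application and click the button via its generated component name.
WebDriver driver = new ChromeDriver();
driver.get("http://localhost:8080/demoerp/");

driver.findElement(By.id("Das-WV_B_DOUP")).click();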

The test case

For everyone who does not know our DemoERP application: it is a simple ERP application featuring customer, product, offer and order management together with a nice statistics screen. All of this can be run either as a Swing application or (powered by a Tomcat server) as a Vaadin application directly in the browser.

The test case that we designed is rather simple:

  1. Load the application website and perform the initial login
  2. Open the customer management screen and add a new customer
  3. Open the offer management screen and search for an offer
  4. Switch to the order of said offer and edit it
  5. Open the statistic screen
  6. Logout

This gives a nice workload for testing, as it touches close to all areas of the application, changes values nearly simultaneously and also adds data with every run. Also note that the test case performs these steps as fast as possible, without any artificial waiting in between.

Thanks to the Selenium IDE, the test case was recorded within a matter of minutes and quickly copied into a unit test. The first round of testing happened with the Chromium WebDriver, which allowed us to watch the test case do its work and make tweaks wherever necessary to let the test succeed.

Setting up Selenium Grid

Selenium Grid (or Grid2) is a simple, scalable framework for distributed Selenium tests, allowing you to control multiple browsers on different platforms. The central server is called the "hub" and the clients are called "nodes". The nodes register themselves at the hub, and from there a client (for example a unit test) can send a request to the hub and receive a node matching the requested requirements. On the client side this happens by simply creating a new instance of the RemoteWebDriver provided by Selenium. It will automatically contact the hub, wait for a node to be ready and then forward all commands to that node.
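
On the client side that request looks roughly like this (a sketch; the hub and application addresses are made up, and PhantomJS capabilities are just the option we happened to use):

// Request a PhantomJS node from the Selenium Grid hub and use it like any WebDriver.
DesiredCapabilities capabilities = DesiredCapabilities.phantomjs();

WebDriver driver = new RemoteWebDriver(new URL("http://hub.example.com:4444/wd/hub"),
                                       capabilities);

driver.get("http://testserver.example.com/demoerp/");
// ... execute the recorded test steps ...
driver.quit();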

This is useful if you want to quickly test an application in multiple browsers on multiple platforms without leaving your IDE, or to perform a simple stress test.

Setting it up is as easy as starting the hub on one machine and starting the nodes on any other machines. We decided to go with PhantomJS as the browser, as we intended to run many instances of it per node and we also had a wide variety of machines, some without a GUI. PhantomJS already has its own built-in node, so we just needed to start PhantomJS on the machines and tell it where to find the hub.

At the end of the day we had 12 physical machines in place, and we maxed them out to get a total of 82 nodes for our stress testing.

Sessions

One thing we noticed very quickly was that lingering sessions can be a real issue and that the timeouts really need to be tailored to your environment. While testing we quickly noticed that sessions kept lingering on; this was less an issue for the server and Tomcat than for our MariaDB installation, because at some point it simply stopped accepting new connections. So we needed to reduce the amount of time a session would linger on, to avoid clogging our test server.

There are two important settings controlling the amount of time a session will linger on in such a setup:

  1. The timeout setting of the Tomcat server
  2. The heartbeat interval and closing of idle sessions in Vaadin

As said, these settings should be tailored to whatever environment the server is running in. In our test environment it didn't make sense to keep sessions around for longer than a few minutes, because if the test case ended or was interrupted for some reason, the session would not be resumed. Armed with that knowledge, we reduced the amount of time a session could linger on, and we could finally start the real test.

Running the test

The server in our case was a rather unspectacular laptop with a Core i7 (1.80GHz) and 16 GB of RAM. Tomcat 6 delivered the application, and the data was provided by a local MariaDB/MySQL database (with the InnoDB engine).

The test itself, which ran several times for several hours, was rather boring, to be honest. 82 clients swarmed the server, hitting it as fast as it could answer requests, without any noticeable effect on its performance. The laptop, despite not being a typical server machine and for sure not intended as one, could easily handle the load that 82 clients produced. We did expect the application to stay in a usable state; however, we did not expect that the load wouldn't have any noticeable effect at all from a user's perspective.

Here is a short summary of the data we gathered (using jvisualvm and ProcessHacker):

  • CPU peaked at 60%, but stood most of the time between 40% and 50%
  • Tomcat peaked at 4GB RAM, but would have been able to work with as little as 2GB
  • Over 21.000 datasets were created during the test run
  • The GUI was not noticeably slower during the test

Our setup could handle that amount of requests with ease. Unfortunately we could not get a hold of more machines to add further nodes for now.

We broke it, we fixed it

During the tests two nasty looking issues stood out, which kept happening in roughly 1% of all test runs with seemingly no pattern behind them.

The first was a simple NullPointerException which kept happening in UIImage, but that was fixed rather easily, as certain methods were not synchronized and thus destined to fail.

The second was a set of exceptions which kept us on our toes: NullPointerExceptions, "Object 'offer' was not found" and "Remote storage returned 15 value(s) but 22 were expected". Clearly something was very wrong, and with the low frequency at which these exceptions occurred, it could only be a race condition somewhere. Looking for possible causes sent us to all corners of the codebase without any clue in sight as to what might cause them.

It only dawned on us when we got hold of the map which did not contain the object "offer"; it only contained the objects "v_statistic_order_offer_year" and "v_statistic_order_offer_month". Looking at the lifecycle objects immediately showed the problem: the session had a completely different lifecycle object. That also explained the wrong number of values, as two lifecycle objects happened to contain identically named objects. But how was that possible? We immediately checked the complete source code associated with creating a session, and in AbstractSession we found the culprit. Even though the complete code for creating a session was properly synchronized where necessary, one simple statement had slipped past:

private static long lSessionCount = 0;
...
private final Long lObjectId = Long.valueOf(++lSessionCount);

For everyone not seeing the problem, here is a simplified explanation. When an object is instantiated, its fields are initialized even before the constructor body runs, and this initialization is not (and cannot directly be) synchronized. The seemingly simple initializer above compiles to six instructions:

  1. Read the current value of the static field lSessionCount onto the stack
  2. Push the constant 1 onto the stack
  3. Add the two values
  4. Duplicate the result on the stack (it is both the new counter value and the value to be boxed)
  5. Store the new value back into the static field
  6. Call Long.valueOf with the result

Six instructions, a lot of time for two threads to overtake each other and cause problems. Instantiating two sessions at once had the potential to give both the same ID. Of course we immediately fixed it and applied some other optimizations.
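
One straightforward way to make such a counter safe is an AtomicLong; a sketch of the idea (not necessarily the exact fix that went into JVx):

// java.util.concurrent.atomic.AtomicLong makes the increment a single atomic operation.
private static final AtomicLong lSessionCount = new AtomicLong(0);
...
private final Long lObjectId = Long.valueOf(lSessionCount.incrementAndGet());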

Another run verified that this fixed all three issues at once, and the tests ran without any further incidents for hours.

Conclusion

We were not even close to pushing the limits of what a JVx application/server can handle, and judging from the performance of the server, I can only call our test setup "humble". Still, it gave us a good measure of what is definitely possible and showed us that we need a bigger setup to really push the limits.

During the tests we also created various utilities to aid us with testing; unfortunately, they are in a rather barebones state and cannot be released right now. Also, there is now a more important project waiting, one that we think you've been waiting for for far too long.