It's been a long time since my last post, but rest assured I have not been devoid of thoughts since then (have you seen the blog title? ;-)). Just a lack of time and energy to write things down…
Today I resume blogging with a Spring Roo plugin I finished last week. For those who don't know about Spring Roo: it's a productivity tool that helps you bootstrap a Spring application within seconds. And although the excitement seems to be more around Spring Boot these days, I find Roo to be a valuable tool in a developer's toolbox… Anyway, Roo comes with many plugins allowing you to choose your persistence layer and APIs: typically JPA based or MongoDB based.
Some months ago I started this plugin allowing you to have a persistence layer based on Elasticsearch. The idea is to have your domain objects directly persisted into an Elasticsearch index and – thanks to the conventions of Roo – to quickly get a CRUD service layer and scaffolded web screens generated for us. After a little contribution to Spring Data for Elasticsearch (here), the plugin was on its way and is now hosted here on GitHub.
Twitter example development
The plugin is not yet released to the official Spring Roo repository, so installation is a bit tedious… The README.md on GitHub explains how to do that, so I won't delve into this part. Instead, I propose to illustrate in more detail the Twitter example that is used to demonstrate the plugin commands.
In order to complete this tutorial, you'll need:
- Spring Roo with the plugin installed (I've used …),
- A Maven installation (I've used …),
- An Elasticsearch installation running on ports 9200 and 9300 (I've used …).
So let's start with a brand new project. In a new directory, start a Roo shell and create a new project with this command:
project --topLevelPackage com.github.lbroudoux.es
This produces a bunch of configuration files, as shown by the screenshot above. The next thing to do is to activate the Elasticsearch layer plugin for Roo and set it up to use an ES node that is not local to the JVM and is hosted on localhost:9300. You do this with this line:
elasticsearch setup --local false --clusterNodes localhost:9300
Configuration files are generated for you, dependencies (on spring-data-elasticsearch) are added, and the Spring version is updated to the required one. The following step is to tell Roo you want a Tweet domain object that will be backed by Elasticsearch. This is done through this new variation of the entity command available in Roo:
entity elasticsearch --class ~.domain.Tweet
The Tweet domain Java class is generated, followed by its AspectJ ITD. You can now embellish your domain class with fields, such as a content field that should be limited to 140 characters in length. This is done with the following field commands in Roo:
field string --fieldName author
field string --fieldName content --sizeMax 140
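Roo keeps most of the plumbing in AspectJ ITDs, but the resulting entity roughly looks like this sketch (the annotations and the String id type are assumptions based on Spring Data Elasticsearch conventions; the plugin's actual output may differ slightly):

```java
package com.github.lbroudoux.es.domain;

import javax.validation.constraints.Size;
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;

// Sketch of the generated entity; getters, setters and the usual
// CRUD helpers actually live in the ITDs that Roo maintains for you.
@Document(indexName = "tweets")
public class Tweet {

    @Id
    private String id;

    private String author;

    @Size(max = 140)
    private String content;
}
```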
Nothing more to say here: the Tweet class is modified. The next step is more interesting: it's here that you ask the plugin to generate a Spring Data repository layer for persisting Tweets into ES. This is done by:
repository elasticsearch --interface ~.repository.TweetRepository --entity ~.domain.Tweet
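For reference, the generated interface is essentially empty: all CRUD operations are inherited from Spring Data's ElasticsearchRepository base interface, roughly like this sketch (assuming the spring-data-elasticsearch API of that generation):

```java
package com.github.lbroudoux.es.repository;

import com.github.lbroudoux.es.domain.Tweet;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

// save, findOne, findAll, delete, etc. all come from
// ElasticsearchRepository; the ITD triggers the Elasticsearch
// implementation proxy at runtime, so no code is needed here.
public interface TweetRepository extends ElasticsearchRepository<Tweet, String> {
}
```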
You see that a new TweetRepository interface has been generated, and that an ITD that triggers an Elasticsearch implementation proxy is also present. Now we have to create a CRUD service layer for our repository, and it's done simply using this command:
service --interface ~.service.TweetService --entity ~.domain.Tweet
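The service layer is a thin CRUD facade over the repository. A sketch of what gets generated (the method names here are assumptions based on Roo's usual naming conventions, not the plugin's exact output):

```java
package com.github.lbroudoux.es.service;

import java.util.List;
import com.github.lbroudoux.es.domain.Tweet;

// Hypothetical sketch of the generated service interface: the
// generated implementation simply delegates each of these methods
// to the injected TweetRepository.
public interface TweetService {

    Tweet saveTweet(Tweet tweet);

    Tweet findTweet(String id);

    List<Tweet> findAllTweets();

    void deleteTweet(Tweet tweet);
}
```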
The TweetService interface and its implementation are generated in such a way that they use the repository we generated earlier to persist and retrieve Tweet instances. Finally, in order to easily test and check the resulting application, we have to set up a web layer and generate scaffolded screens for our domain objects. This is done by sequencing these 2 commands:
web mvc setup
web mvc all --package ~.web
And a bunch of web resources, controllers and configuration files are now present in our application. Development is done!
Twitter example execution
We now want to execute all of this in order to properly test our app (yes, Roo offers many ways to unit and integration test your app, but a screen is more expressive, at least for a blog post ;-)).
First, in a terminal, start your Elasticsearch node on localhost. The default command will do the job; you don't need extra configuration:
bin/elasticsearch
Then, from the terminal where you were working with the Roo shell, exit the shell and launch the Tomcat plugin to execute your app. This is done with this Maven command:
mvn tomcat:run
After Tomcat has started up, you can open a browser at http://localhost:8080/es. You'll get this screen, which is the default home page for the application.
From there, you can access a page allowing you to create new Tweets with the fields we have added to our domain class.
Persistence works fine, and you'll see by checking the icons that all the services are there for showing, updating, finding and deleting Tweets.
Twitter example validation
Now you might tell me: "OK, OK… Stuff is persisted, but how do you know it's persisted into the Elasticsearch node?" A simple thing to do is to check on ES using the Marvel monitoring solution (I highly recommend installing it if you haven't already!). So open a new browser tab at http://localhost:9200/_plugin/marvel/ and check the "Cluster Overview" dashboard.
You see that a new index called tweets containing 1 document is now present. In order to check its content, you can go to the "Sense" dashboard, which offers an online querying tool for your indices:
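If you prefer the command line to Sense, the same check can be done with curl against the REST API (assuming the default HTTP port 9200 and the tweets index name shown above):

```shell
# Ask Elasticsearch for every document in the tweets index; the
# response should report a total of 1 hit containing our tweet.
curl -XGET 'http://localhost:9200/tweets/_search?pretty' -d '
{
  "query": { "match_all": {} }
}'
```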
Now you see that our first tweet has really been persisted into Elasticsearch!
So I have demonstrated how to write a full-blown Spring application that:
- Persists and retrieves its domain objects into Elasticsearch,
- Is correctly architected with a repository layer and a service layer,
- Presents a basic administrative web frontend,
in no more than 9 lines of Roo commands! Wow!
Far beyond this basic persistence stuff, we are able – as developers – to easily build cool apps using the powerful indexing and querying features of Elasticsearch. Just consider this tutorial as a quick-starter and think about full-text search, geo queries, analytics and aggregations on the various fields of your Tweets… it's all within reach!