Jump into Java microframeworks, Part 3: Spark

07.01.2016
Spark makes fewer assumptions than the other microframeworks introduced in this short series, and it is also the most lightweight of the three stacks. Spark reduces request handling to bare simplicity, and it supports a variety of view templates. In Part 1 you set up a Spark project in your Eclipse development environment, loaded some dependencies via Maven, and learned Spark programming basics with a simple example. Now we'll extend the Spark Person application, adding persistence and other capabilities that you would expect from a production-ready web app.

If you followed my introduction to Ninja, then you'll recall that Ninja uses Guice for persistence instrumentation, with JPA/Hibernate being the default choice. Spark makes no such assumptions about the persistence layer. You can choose from a wide range of options, including JDBC, eBean, and JPA. In this case, we'll use JDBC, which I'm choosing for its openness (it won't limit our choice of database) and scalability. As I did with the Ninja example app, I'm using a MariaDB instance on localhost. Listing 1 shows the database schema for the Person application that we started developing in Part 1.

CRUD (create, read, update, delete) capabilities are the heart of object-oriented persistence, so we'll begin by setting up the Person app's create-person functionality. Instead of coding the CRUD operations straightaway, we'll start with some back-end infrastructure. Listing 2 shows a basic DAO layer interface for Spark.
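As a rough sketch, such an interface might look like this (the PersonDao name and the Map-based signature are assumptions for illustration):

import java.util.Map;

// A bare-bones DAO contract; we'll grow it as the app needs more operations.
public interface PersonDao {

    // Accepts a map of field names to values and reports success or failure.
    boolean addPerson(Map<String, Object> data);
}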

Next we'll add the JdbcDAO implementation. For now we're just blocking out a stub that accepts a map of data and returns success. Later we'll use that data to define the entity fields.
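Something like the following stub would do for now:

import java.util.Map;

// Stub implementation; the real JDBC wiring comes later.
public class JdbcDAO implements PersonDao {

    @Override
    public boolean addPerson(Map<String, Object> data) {
        // Pretend the insert worked until there's a real database behind us.
        return true;
    }
}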

We'll also need a Controller class that takes the DAO as an argument. The Controller in Listing 4 is a stub that returns a JSON string describing success or failure.
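A controller stub along these lines fits that description (the class name and the JSON strings are illustrative assumptions):

// Thin layer between the Spark routes and the DAO.
public class Controller {

    private final PersonDao dao;

    public Controller(PersonDao dao) {
        this.dao = dao;
    }

    // Takes the raw request body and reports the outcome as a JSON string.
    public String addPerson(String body) {
        // We'll parse the body later; for now pass an empty map through.
        boolean ok = dao.addPerson(java.util.Collections.<String, Object>emptyMap());
        return ok ? "{\"status\":\"OK\"}" : "{\"status\":\"ERROR\"}";
    }
}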

Now we can reference the new controller and DAO layers in App.java, the main class for our Spark application:
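Wired up, that might look roughly like this (the static controller field is an assumption, chosen to match the App.controller member discussed below):

import static spark.Spark.*;

public class App {

    // Shared controller instance; the route lambdas below can see it.
    static Controller controller = new Controller(new JdbcDAO());

    public static void main(String[] args) {
        // (1) Map POST /person to the controller, passing along the JSON request body.
        post("/person", (req, res) -> controller.addPerson(req.body()));
    }
}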

Notice the line in Listing 5 that is commented with the number 1. You'll recall from Part 1 that this line is how we handle a route in Spark. In the route-handler lambda, we just access the App.controller member (given that lambdas have full access to the enclosing class context), then call the addPerson() method. We pass in the request body via req.body(). We expect that body to be JSON containing the fields for the new Person entity.

If we now hit the POST /person URL (using Postman, which I introduced in Part 2), we'll get a message back indicating success. That response tells us the plumbing works, but there's no real content behind it yet. For that we need to populate our database.

We'll use JdbcDAO to add a row or two to our database. To set this up, we first need to add some items to pom.xml, the application's Maven dependency file. The updated POM in Listing 6 includes a MySQL JDBC driver and Apache DBUtils, a simple wrapper library that saves us from managing raw JDBC ourselves. I've also included Boon, a JSON project that is reputed to be the fastest way to process JSON in Java. If you're familiar with Jackson or GSON, Boon does the same job with a similar syntax. We'll put Boon to use shortly.

Now, change JdbcDAO to look like Listing 7. The addPerson() method will take the first_name and last_name values from the map argument and use them to insert a Person into the database.
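In rough outline, the DAO might now look like this (the MysqlDataSource setup, the person table and column names, and the credentials are placeholder assumptions; adjust them to your own schema):

import java.util.Map;
import javax.sql.DataSource;
import org.apache.commons.dbutils.QueryRunner;
import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;

public class JdbcDAO implements PersonDao {

    // Assumed schema: person (id INT AUTO_INCREMENT PRIMARY KEY,
    //                         first_name VARCHAR(100), last_name VARCHAR(100))
    private final DataSource dataSource;

    public JdbcDAO() {
        // No connection pooling here; fine for a demo, not for production.
        MysqlDataSource ds = new MysqlDataSource();
        ds.setURL("jdbc:mysql://localhost:3306/person_db");
        ds.setUser("root");         // change these placeholders
        ds.setPassword("password"); // for your own setup
        this.dataSource = ds;
    }

    @Override
    public boolean addPerson(Map<String, Object> data) {
        try {
            // DBUtils handles connection, statement, and cleanup for us.
            QueryRunner run = new QueryRunner(dataSource);
            int inserted = run.update(
                    "INSERT INTO person (first_name, last_name) VALUES (?, ?)",
                    data.get("first_name"), data.get("last_name"));
            return inserted == 1;
        } catch (java.sql.SQLException e) {
            e.printStackTrace();
            return false;
        }
    }
}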

In Listing 7 we obtained a JDBC dataSource instance, which we'll use when connecting to the database instance running on localhost. In a true production scenario we'd need to do something about connection pooling, but we'll side-step that for the present. (Note that you'll want to change the root and password placeholders above to something unique for your own implementation.)

Now let's return to the controller and update it. The updated controller, shown in Listing 8, takes a String and converts it into a Map, which can be passed to the DAO. We'll see how Boon lives up to its name here, because the String argument will be a bit of JSON from the UI.
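A sketch of the updated controller, assuming Boon's JsonFactory/ObjectMapper API:

import java.util.Map;
import org.boon.json.JsonFactory;
import org.boon.json.ObjectMapper;

public class Controller {

    // (1) Reusable Boon mapper, kept as a class member.
    private static final ObjectMapper mapper = JsonFactory.create();

    private final PersonDao dao;

    public Controller(PersonDao dao) {
        this.dao = dao;
    }

    public String addPerson(String body) {
        // (2) Parse the JSON request body into a Java Map.
        Map<String, Object> data = mapper.fromJson(body, Map.class);
        // (3) Hand the map to the DAO for persistence.
        boolean ok = dao.addPerson(data);
        return ok ? "{\"status\":\"OK\"}" : "{\"status\":\"ERROR\"}";
    }
}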

The line marked 1 creates a mapper that we can use to convert JSON (it's a class member -- this ObjectMapper is designed to be reused). The line marked 2 uses the mapper to parse the string into a Java Map. Finally, in line 3, the map is passed into the DAO.

Now if we send a POST request with a JSON body containing the name fields, our new Person will be added to the database. Remember that the primary key is an auto-increment field, so it isn't included in the request.

Here's the request displayed in Postman:

So far I've demonstrated a dynamically typed approach to creating the Spark data layer, modeling with maps of data rather than explicitly defined classes. If we wanted to push further in the dynamic direction, we could insert a single add(String type, Map data) method in the DAO, which would programmatically persist a given type. For this approach we'd need to write a layer to map from Java to SQL types.

The more common approach to persistence is to use model classes, so let's take a quick look at how that would work in Spark. Then we'll wrap up the remaining Person CRUD.

For a more traditional, statically typed approach to the data layer, we start by adding a Person class to the original stub application, as seen in Listing 10. This will be our model.
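A sketch of such a model class, assuming Boon's @JsonProperty annotation lives in org.boon.json.annotations:

import org.boon.json.annotations.JsonProperty;

public class Person {

    private int id;

    // Map the underscore-style JSON fields from the form onto camel-cased Java fields.
    @JsonProperty("first_name")
    private String firstName;

    @JsonProperty("last_name")
    private String lastName;

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
}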

The @JsonProperty annotation in Listing 10 tells Boon to convert JSON's underscore format (which matches the HTML form field names) to the camel-cased fields of a Java class. If you're familiar with Jackson, you'll observe that Boon has borrowed some of its annotations. Also notice the modified addPerson() method on the controller below. It shows how the JSON String is converted into the object.
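The modified controller method might read roughly as follows (note that the DAO's addPerson() signature changes to accept a Person as well):

public String addPerson(String body) {
    // Let Boon bind the JSON request body directly to the model class.
    Person person = mapper.fromJson(body, Person.class);
    boolean ok = dao.addPerson(person);
    return ok ? "{\"status\":\"OK\"}" : "{\"status\":\"ERROR\"}";
}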

In this case we aren't doing anything but persisting the Person object, but we can now use the model instance in whatever business logic we please. In Listing 12 I've updated the JdbcDAO.addPerson() method to use the Person class. The difference here is that the first and last names are now pulled from the Person getters, rather than from the Map used in Listing 7.
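A sketch of that change, using the same assumed table and columns as before:

@Override
public boolean addPerson(Person person) {
    try {
        QueryRunner run = new QueryRunner(dataSource);
        // Same insert as before, but the values now come from the model's getters.
        int inserted = run.update(
                "INSERT INTO person (first_name, last_name) VALUES (?, ?)",
                person.getFirstName(), person.getLastName());
        return inserted == 1;
    } catch (java.sql.SQLException e) {
        e.printStackTrace();
        return false;
    }
}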

The Person application's request-processing infrastructure now consists of three layers (route, controller, and DAO), which successively convert the request data from a JSON string into a Java object and, finally, into a database row.

We have our model and a way to persist it. Next we'll begin developing a UI to save and view objects in the database. In Spark, this means adding static JavaScript resources to use in the template.html page.

To start, create a src/main/resources/public folder to hold the new resources, as shown in Figure 2.

For our JavaScript tool we'll use jQuery, which is especially useful for Ajax and DOM handling. If you don't have it already, download the latest version of jQuery (2.1.4 as of this writing) and place it in your new public folder. You can either download the file straight into the directory or create a new file in public and paste the jQuery source into it.

Next, using the same process, add the Serialize Object jQuery plugin. This plugin will manage the process of converting the HTML form into a JSON format that the server can understand. (Recall that the addPerson() method from Listing 8 expects a JSON string.)

Finally, add a file called app.js into the same directory. As you can see in Listing 13, app.js contains simple controls for the template.html.

app.js produces an App object, which contains methods we'll use to interact with server-side REST services.

Finally, we need to tell Spark about our /public directory. Do this by going to the main() method in App.java and adding the code from Listing 14, below. Be sure to add it before you define any routes! This tells Spark to map the application's assets directory, allowing browsers to request public resources such as the JavaScript files.
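With Spark 2.x this is a single call at the top of main(); the /public value must match the resources folder we just created:

public static void main(String[] args) {
    // Serve everything under src/main/resources/public as static content.
    // This call must come before the first route is mapped.
    staticFileLocation("/public");

    post("/person", (req, res) -> controller.addPerson(req.body()));
    // ... any further routes go here ...
}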

We defined the addPerson endpoint at the beginning of this tutorial, so it's all set to be called by the JavaScript App.addPerson() method. Next we'll create the GET /people endpoint for App.loadPeople.

Start by mapping the /people path in App.java, as shown in Listing 15.
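A sketch of that mapping, passing the request body through to the controller:

// Map GET /people to the controller; the body is unused for now but passed along.
get("/people", (req, res) -> controller.loadPeople(req.body()));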

Next add the loadPeople() method to the controller, as shown in Listing 16.
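A plausible version of that method, reusing the Boon mapper from earlier:

// Converts whatever the DAO returns into a JSON array for the response.
public String loadPeople(String body) {
    // The body is ignored for now, but it could carry search parameters later.
    return mapper.toJson(dao.loadPeople());
}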

The loadPeople() method in Listing 16 uses Boon's JSON mapper to convert whatever dao.loadPeople returns into JSON. Note that we've also taken the request body as an argument. We won't do anything with it for now, but it's there if we need it later -- for example, if we wanted to add search parameters to the application.

Listing 17 is the JdbcDAO implementation of loadPeople(). Remember that we'll also need to add the loadPeople() method to the DAO interface.
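Roughly, along with the necessary java.util and org.apache.commons.dbutils imports (the person table name is still an assumption):

// New DAO method: fetch every row from the person table as Person beans.
@Override
public List<Person> loadPeople() {
    try {
        // (1) A ResultSetHandler that maps each result row onto a Person bean.
        ResultSetHandler<List<Person>> handler = new BeanListHandler<Person>(Person.class);
        QueryRunner run = new QueryRunner(dataSource);
        return run.query("SELECT * FROM person", handler);
    } catch (java.sql.SQLException e) {
        e.printStackTrace();
        return java.util.Collections.<Person>emptyList();
    }
}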

JdbcDAO.loadPeople() leverages DBUtils again, this time to issue the query and to convert the SQL result set into a List of Person objects. The conversion is handled by passing a ResultSetHandler to the query() method; you can see its definition in the line commented with the number 1. Also note the use of generics and the class argument that specifies the type of results we want back. DBUtils provides several useful handlers like this one.

At this point, we can test out the loadPeople endpoint in Postman by sending a GET to /people. What you'll find is that it almost works: we get back our rows, but they only have the IDs, no first and last name fields. Figure 3 shows Postman returning a JSON array with only the ID fields.

For the Person names to be properly listed, we have to convert the SQL-style database fields (first_name) into the JavaBeans style (firstName). This is exactly analogous to when we configured Boon to convert from underscore to camel-case on the front-end. Fortunately, DBUtils makes the conversion easy; just swap line 1 in Listing 17 for what's shown in Listing 18. The DBUtils GenerousBeanProcessor accepts underscore-separated names.
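The replacement line might read as follows; GenerousBeanProcessor and BasicRowProcessor both ship with DBUtils:

// (1) Tolerant mapping: GenerousBeanProcessor matches first_name to firstName.
ResultSetHandler<List<Person>> handler = new BeanListHandler<Person>(
        Person.class, new BasicRowProcessor(new GenerousBeanProcessor()));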

The passed-in RowProcessor customizes our conversion, and GenerousBeanProcessor transforms our first_name to firstName. With these changes, a Postman test on /people should return the name fields we're looking for.

Figure 4 shows the new response in Postman, captured after a couple of country music legends have been added to the database (note that this is old country). The people are listed in the response as a JSON array, this time including the first and last name fields.

We'll finish up the basic Person app by adding a few more UI elements to template.html, as shown in Listing 19.

Listing 19 includes the JavaScript resources that we mapped in Listing 14, and uses the methods from our JavaScript main object, App, which we defined in Listing 13. When the page first loads, our jQuery onload handler will display all Persons in the database. The form fields will be sent in a JSON body to the addPerson service that we tested with Postman. Upon returning, the form will automatically refresh the person list.

We've covered a lot of ground, and have the basics in place for a Person application with persistence and a functional UI. In addition to Spark's core infrastructure, we've used DBUtils, Boon, and jQuery to wire together the application's data layer and UI.

For our last experiment with Spark, let's add login support to the Person app. This will let a user log in, save the session info, and check for authorization: all important steps toward a more secure app.

Listing 20 shows the initial updates to the template.html file. The new loginForm will allow the user to enter a username and password and use buttons to log in or log out. We'll use jQuery to submit the login data via Ajax.

We add the login functionality to app.js similarly to how we added the addPerson feature in Listing 13. We'll just use serializeObject to pull the fields from the loginForm and submit them as JSON.

Listing 21 shows the App object methods that have been updated from Listing 13. Note that in the startup method, we add a click-event handler for the login button and introduce the login() method to handle the click.

Next we add a login handler, starting with this new line in App.java:
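Something along these lines, with the path and controller method name assumed from the description that follows:

// Map POST /login to the controller; the request itself is passed along
// so the controller can reach the session.
post("/login", (req, res) -> controller.login(req, req.body()));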

We then update our controller, as shown in Listing 22:
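A bare-bones sketch of such a method, doing no real credential checking, just session bookkeeping:

// Accepts Spark's Request so we can reach the session, plus the JSON body.
public String login(spark.Request req, String body) {
    // Pull the username out of the submitted JSON; any credentials are accepted here.
    Map<String, Object> credentials = mapper.fromJson(body, Map.class);
    String username = (String) credentials.get("username");

    // Stash the username in the session, creating the session if needed.
    req.session(true).attribute("user", username);

    return "{\"status\":\"OK\"}";
}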

You'll note that the above login method is silly, with no real authentication logic. What it does do is show off Spark's session management API. Notice that we passed the actual request object in to the controller and used it to add the username to the session. Now let's make use of the "authenticated" user. Listing 23 has our authorization check, which is added to App.java.

Listing 23 wouldn't cut it for a real-world application, but it demonstrates Spark's implementation of filters, which we've used to handle authorization. In this case, we add a before filter and check to see whether the user is logged in. If the user isn't logged in, they won't be allowed to submit any post with "Hendrix" in it. This ensures that unauthorized users won't be able to mess with Jimi.

You can verify the authorization mechanism by attempting to create your own Jimi Hendrix Person in the UI without a login, and then again with one.

Something else to note in Listing 23 is Spark's halt API, which stops request processing and returns an HTTP response with the specified status code; in this case it's 401, Unauthorized.
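Put together, the filter might look something like this (the guarded path and the message text are assumptions):

// Runs before any request to /person; blocks Hendrix-related changes
// unless someone has logged in.
before("/person", (req, res) -> {
    boolean loggedIn = req.session().attribute("user") != null;
    if (!loggedIn && req.body() != null && req.body().contains("Hendrix")) {
        // Stop processing and send back 401 Unauthorized.
        halt(401, "You are not authorized to modify Jimi");
    }
});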

Spark's API is so lean that we've covered a good percentage of its functionality in this short tutorial. If you've followed the example application since Part 1, then you've set up Spark's service and persistence layers, built a basic UI, and seen enough of Spark's authorization and authentication support to understand how it works. For very small projects, Spark is a clear winner: it makes mapping endpoints dead simple, while introducing no obstacles to building out a larger infrastructure. For a use case where you definitely want to use JPA and work in an IoC container, Ninja might be a better choice. But for great flexibility with a lean footprint (even on large projects), Spark is a very good bet. It's another fine entry in the wealth of open-source excellence available to Java developers.

Stay tuned for the final article in this series, an in-depth introduction to Play!

(www.javaworld.com)

By Matthew Tyson
