A case study of the usage of Gradle in the Ratpack web framework. First, we'll examine the Ratpack Gradle plugins, including their functionality, implementation, and testing. Next, we'll examine the build script for the Ratpack project itself. Here, we'll discuss various details of the project's build, including handling multiple projects, multiple types of testing, support for multiple styles of target hardware (developer workstations, cloud CI), and more. For each, we'll go over the desired behavior, how it was achieved, and why it was necessary.
The Gradle in Ratpack: Dissected
1. The Gradle in Ratpack: Dissected
Presented by David Carr
Gradle Summit 2016 - June 22
GitHub: davidmc24
Twitter: @varzof
2. Who Am I?
• Systems Architect at
CommerceHub
• Java/Groovy Developer
• Gradle Enthusiast
• Ratpack Contributor
3. What is Ratpack?
• A web framework
• A set of Java libraries
• Purely a runtime
• No installable package
• No coupled/required build tooling
• Gradle recommended; plugins provided
7. gradle run
$ gradle run
:compileJava UP-TO-DATE
:compileGroovy UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:configureRun
:run
[main] INFO ratpack.server.RatpackServer - Starting server...
[main] INFO ratpack.server.RatpackServer - Building registry...
[main] INFO ratpack.server.RatpackServer - Ratpack started (development) for http://localhost:5050
8. gradle run --continuous
$ gradle run --continuous --quiet
[main] INFO ratpack.server.RatpackServer - Starting server...
[main] INFO ratpack.server.RatpackServer - Building registry...
[main] INFO ratpack.server.RatpackServer - Ratpack started (development) for http://localhost:5050
[main] INFO ratpack.server.RatpackServer - Stopping server...
[main] INFO ratpack.server.RatpackServer - Server stopped.
[main] INFO ratpack.server.RatpackServer - Starting server...
[main] INFO ratpack.server.RatpackServer - Building registry...
[main] INFO ratpack.server.RatpackServer - Ratpack started (development) for http://localhost:5050
$
26. FunctionalSpec
int scrapePort(Process process) {
  int port = -1
  def latch = new CountDownLatch(1)
  Thread.start {
    process.errorStream.eachLine { String line ->
      System.err.println(line)
      if (latch.count) {
        if (line.contains("Ratpack started for http://localhost:")) {
          def matcher = (line =~ "http://localhost:(\\d+)")
          port = matcher[0][1].toInteger()
          latch.countDown()
        }
      }
    }
  }
  if (!latch.await(15, TimeUnit.SECONDS)) {
    throw new RuntimeException("Timeout waiting for application to start")
  }
  port
}
32. Java Settings
def jvmEncoding = java.nio.charset.Charset.defaultCharset().name()
if (jvmEncoding != "UTF-8") {
  throw new IllegalStateException(
    "Build environment must be UTF-8 (it is: $jvmEncoding) - " +
    "add '-Dfile.encoding=UTF-8' to the GRADLE_OPTS environment variable")
}
if (!JavaVersion.current().java8Compatible) {
  throw new IllegalStateException("Must be built with Java 8 or higher")
}
Good morning, everyone. Today we're going to be examining the usage of Gradle in the Ratpack open source project. If you have any questions, feel free to ask throughout the talk.
Before we get started, let's cover some background. I'm David Carr. I'm a Systems Architect at CommerceHub, where I've been working with Java and Groovy for many years now. I'm also a contributor to the Ratpack project.
So, what *is* Ratpack exactly? In terms of its basic function, it serves as a web framework. Unlike some other frameworks, however, it operates solely as a set of Java libraries you compile and run against. You don't "install" Ratpack; you just build an app using it, just like you would any other library.
Due to this, the Ratpack project itself is a great reference example of a non-trivial multi-project Gradle build. Similarly, Ratpack's Gradle plugin is a useful example of Gradle plugin techniques.
So, what does a minimal Ratpack project using the ratpack-groovy plugin look like?
Here's an example build.gradle file. First, we declare that we want to use the plugin. Then, we tell it where to resolve artifacts from. Then, we give it a logging library... not because it's strictly necessary, but because I hate "no SLF4J binding found" warnings.
If we run it now, it will tell us "No ratpack.groovy found"... so let's create one.
For a Groovy-based Ratpack application, this is the entry point for the application logic. In this case, we keep it really simple... respond to all requests with "Hello world".
Great! Let's run it!
So, we run "gradle run"... and we have a running app.
That's great and all... but we've got apps to write. We can't be spending all day restarting builds to test updates. So let's have it automatically build and restart for us!
If we run it with the --continuous option, when we change a source file, it will automatically re-build the application and re-start the app.
You may have noticed that there are several things you'd usually need that were missing from this process. There weren't any dependency declarations for Groovy, or Ratpack itself, nor a specification for how to run the application. Those are all configured for us by the plugin. Let's start taking a look at how that works.
It's generally good practice to separate capability-based plugins from convention-based plugins. Here, you see Ratpack's capability-based plugin, which just installs an extension. We'll take a look at the body of the extension in a moment.
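A capability-based plugin of this shape can be sketched roughly as follows. The class structure and names here are illustrative assumptions, not Ratpack's actual source:

```groovy
import org.gradle.api.Plugin
import org.gradle.api.Project

// Sketch of a capability-based plugin: it only registers the extension,
// leaving all conventions to a separate plugin that builds on it.
class RatpackBasePlugin implements Plugin<Project> {
    void apply(Project project) {
        // Exposed as "ratpack" in build scripts,
        // e.g. ratpack.dependency('hikari')
        project.extensions.create('ratpack', RatpackExtension, project)
    }
}
```

Because this plugin adds capability without opinion, users who dislike the conventions can apply just this one and wire things up themselves.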
The Ratpack plugin builds on top of the base plugin with conventions. First we ensure that the Gradle version is high enough.
Then, we apply plugins used by the remainder of the plugin.
Next, we add the src/ratpack directory as a resource directory in the main source set.
After that, we add some dependencies to the project, so that the user doesn't need to declare them in their build.gradle. For Java projects, we add the Ratpack core and test modules. Note that we're accessing methods on the extension object.
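Putting those steps together, the convention plugin might look like this sketch. The minimum Gradle version shown and the exact API calls are assumptions in keeping with the 2016-era Gradle APIs, not Ratpack's actual source:

```groovy
import org.gradle.api.GradleException
import org.gradle.api.Plugin
import org.gradle.api.Project
import org.gradle.api.plugins.ApplicationPlugin
import org.gradle.api.plugins.JavaPluginConvention
import org.gradle.util.GradleVersion

// Sketch of the convention plugin (version number is illustrative)
class RatpackPlugin implements Plugin<Project> {
    void apply(Project project) {
        // 1. Fail fast if Gradle is too old
        if (GradleVersion.current() < GradleVersion.version('2.6')) {
            throw new GradleException('Ratpack requires Gradle 2.6 or later')
        }
        // 2. Apply the plugins we build on
        project.plugins.apply(RatpackBasePlugin)
        project.plugins.apply(ApplicationPlugin) // implies the Java plugin
        // 3. Register src/ratpack as a resource directory of "main"
        def sourceSets = project.convention
            .getPlugin(JavaPluginConvention).sourceSets
        sourceSets.main.resources.srcDir('src/ratpack')
        // 4. Add default dependencies via the extension
        def ratpack = project.extensions.getByType(RatpackExtension)
        project.dependencies.add('compile', ratpack.core)
        project.dependencies.add('testCompile', ratpack.test)
    }
}
```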
The extension keeps track of the src/ratpack directory (which we saw used in the RatpackPlugin), as well as retaining a reference to the project's dependency handler.
It uses the dependency handler to provide utility methods such as "core", and "test", as well as support for arbitrary dependencies. All of them resolve to creating a dependency for the specified Ratpack module at the same version as the Ratpack plugin.
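A sketch of what that extension could look like follows. The `RATPACK_VERSION` constant stands in for however the plugin actually bakes in its own version (for example, a generated resource); its value and the class shape are illustrative:

```groovy
import org.gradle.api.Project
import org.gradle.api.artifacts.Dependency
import org.gradle.api.artifacts.dsl.DependencyHandler

// Sketch of the extension registered by the capability plugin
class RatpackExtension {
    static final String RATPACK_VERSION = '1.4.0' // illustrative value
    private final DependencyHandler dependencies
    final File baseDir // the src/ratpack directory

    RatpackExtension(Project project) {
        dependencies = project.dependencies
        baseDir = project.file('src/ratpack')
    }

    Dependency getCore() { dependency('core') }
    Dependency getTest() { dependency('test') }

    // Arbitrary modules, e.g. ratpack.dependency('hikari')
    Dependency dependency(String module) {
        dependencies.create("io.ratpack:ratpack-$module:$RATPACK_VERSION")
    }
}
```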
Back to the Ratpack plugin, we tweak the "run" task provided by the Application plugin, and specify an additional system property. This signals the application to run in development mode.
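As a rough sketch, that tweak amounts to setting a system property on the `run` task that the Application plugin added (the property name matches what Ratpack reads at startup; the surrounding code is simplified):

```groovy
// Inside the plugin's apply method: signal development mode on "run".
// Ratpack checks the ratpack.development system property at startup.
def run = project.tasks.getByName('run')
run.systemProperty 'ratpack.development', 'true'
```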
There's also some logic in here to tie into Gradle's continuous build lifecycle, but I'm not going to cover that in detail as it's currently still incubating, and we're using internal APIs to get it to work the way we want.
In the Ratpack Groovy plugin, we leverage all of the functionality configured in the Ratpack plugin, plus we apply the Gradle core Groovy plugin, and add a default entry point for the application, adding support for the ratpack.groovy script file we saw in the example app.
We also add some additional dependencies; the ratpack groovy library and the ratpack groovy test library.
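The Groovy flavor can be sketched as another thin layer on top, again with illustrative class names and API usage rather than Ratpack's actual source:

```groovy
import org.gradle.api.Plugin
import org.gradle.api.Project
import org.gradle.api.plugins.GroovyPlugin

// Sketch: reuse the base conventions, add Groovy support, and point
// the start scripts at the script-loading entry point
class RatpackGroovyPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.plugins.apply(RatpackPlugin)
        project.plugins.apply(GroovyPlugin)
        // GroovyRatpackMain loads ratpack.groovy from the classpath
        project.mainClassName = 'ratpack.groovy.GroovyRatpackMain'
        def ratpack = project.extensions.getByType(RatpackExtension)
        project.dependencies.add('compile',
            ratpack.dependency('groovy'))
        project.dependencies.add('testCompile',
            ratpack.dependency('groovy-test'))
    }
}
```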
Ratpack's plugin testing is based around Spock and the Gradle TestKit. TestKit is currently focused on functional testing.
All of the logic for using TestKit to interact with Gradle is isolated in a FunctionalSpec abstract class, while the test specs focus on the contents of build files, running tasks, and the expected results.
This test just verifies that the dependency declaration functionality on the extension works as expected.
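Such a spec might look like the following sketch, assuming FunctionalSpec provides `buildFile` and `run` helpers as described in the talk; the module name and assertion are illustrative:

```groovy
// A Spock feature method in a spec extending FunctionalSpec
def "extension dependency methods add the expected modules"() {
    given:
    buildFile << '''
        dependencies {
            compile ratpack.dependency('hikari')
        }
    '''

    when:
    def result = run('dependencies', '--configuration', 'compile')

    then:
    result.output.contains('io.ratpack:ratpack-hikari')
}
```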
Here's a test that builds and starts a Ratpack app. First, we create the Ratpack groovy script.
Then, we run the build to install it, run the resulting start script, and figure out what port it ran on. Once it's up, we assert on the expected URL's contents, and then clean up the test.
Here, we have the base class. It defines utility methods to wrap the TestKit, running Gradle tasks with either expected success or failure.
We have some utility methods to easily create files within the test's project directory.
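A simplified sketch of such a base class is below. The method names mirror the talk, but the details (JUnit rule choice, plugin classpath injection via `withPluginClasspath`) are assumptions, not Ratpack's actual source:

```groovy
import org.gradle.testkit.runner.BuildResult
import org.gradle.testkit.runner.GradleRunner
import org.junit.Rule
import org.junit.rules.TemporaryFolder
import spock.lang.Specification

// Sketch of a TestKit-wrapping base class for functional specs
abstract class FunctionalSpec extends Specification {
    @Rule TemporaryFolder projectDir = new TemporaryFolder()

    File getBuildFile() { file('build.gradle') }

    // Create a file at the given path within the test project directory
    File file(String path) {
        def f = new File(projectDir.root, path)
        f.parentFile.mkdirs()
        f
    }

    BuildResult run(String... args) {
        runner(args).build() // throws if the build fails
    }

    BuildResult fail(String... args) {
        runner(args).buildAndFail() // throws if the build succeeds
    }

    private GradleRunner runner(String... args) {
        GradleRunner.create()
            .withProjectDir(projectDir.root)
            .withArguments(args as List)
            .withPluginClasspath()
    }
}
```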
In the setup method, we set up a basic build file, which individual tests can append to. The ratpack version is loaded from a private repository on the local disk, since it's likely that you'll be running the tests with a local snapshot version.
To scrape the port of running processes, we start a thread to consume stderr, and return the parsed port. If the expected expression isn't matched within a timeout, we consider it a test failure.
Now that we've examined the Ratpack Gradle plugins, let's move on to the Ratpack project's own build.
The Ratpack project is currently composed of 28 separate modules. Some are the main Ratpack libraries, others are integrations with third-party libraries, and others are used for project-internal purposes, such as performance testing the project, generating the Ratpack documentation, and running the project website.
When you have that many build files, having them all be "build.gradle" gets in the way. Instead, we use this snippet in settings.gradle to have the build script for each sub-project named after the project. Thus, we have a ratpack-core.gradle and a ratpack-guice.gradle. In this way, we can easily use IntelliJ's "Navigate to File" support to jump right to the desired build script.
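That settings.gradle snippet amounts to a few lines along these lines:

```groovy
// settings.gradle: name each sub-project's build script after the
// project, e.g. ratpack-core/ratpack-core.gradle instead of build.gradle
rootProject.children.each { project ->
    project.buildFileName = "${project.name}.gradle"
}
```

For deeper project hierarchies you'd walk `allprojects` instead of just `children`, but the flat form shown covers a single level of sub-projects.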
As your project gets bigger, it gets more difficult to manage your build logic. Sometimes it can be useful to separate out concepts into their own build script file, so that you can easily find that portion of the build logic. In a multi-project build, you may also have re-use to consider, as multiple sub-projects may require the same logic.
The approach used in the Ratpack project is to have a "gradle" directory in the root of the project, and store extracted build scripts there. These can then be used as needed with an "apply from" statement.
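Applying one of those extracted scripts is a one-liner; the script name here is illustrative:

```groovy
// In the root build.gradle or any sub-project's build script:
// pull in shared logic from the root-level "gradle" directory
apply from: "$rootDir/gradle/dependencyRules.gradle"
```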
The Ratpack Gradle plugins need to know what version of Ratpack to use at build time, and Ratpack Core exposes the current version at runtime. Both of these could be accomplished by hard-coding the version in a class, or manually updating a resource file, but those approaches aren't particularly elegant, and are prone to humans forgetting to update them.
Instead, we introduce a Gradle task that writes a version file resource automatically based on the version of the Gradle project. By properly declaring the inputs and outputs, we ensure that Gradle's incremental build support works as expected when the project version changes.
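A minimal sketch of such a task follows; the file locations and names are assumptions for illustration:

```groovy
// Generate a resource file containing the project version.
// Declaring inputs/outputs makes the task incremental: it reruns
// only when the project version (or the output file) changes.
task writeVersionNumberFile {
    def outputFile = file("$buildDir/generated-resources/ratpack-version.txt")
    inputs.property 'version', { project.version.toString() }
    outputs.file outputFile
    doLast {
        outputFile.parentFile.mkdirs()
        outputFile.text = project.version
    }
}

// Include the generated directory in the main output,
// wired so the task runs whenever that output is needed
sourceSets.main.output.dir("$buildDir/generated-resources",
    builtBy: writeVersionNumberFile)
```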
We want the file to be present regardless of whether you're building Ratpack via Gradle or in an IDE, so we have a few tasks depend on the writeVersionNumberFile task.
Similar to the version number file, we need a resource available at runtime with the version of Groovy expected for Ratpack, so that we can verify that the version on the classpath is modern enough to be supported. Here the basic technique is the same, but instead of adding a new task, we add a new action to the existing processResources task.
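Sketched out, that technique looks like this; the resource file name is illustrative, and `groovyVersion` is assumed to be declared elsewhere in the build (for example, as an ext property):

```groovy
// Append an action to the existing processResources task rather
// than adding a new task; the input property keeps it incremental
processResources {
    inputs.property 'groovyVersion', { groovyVersion }
    doLast {
        new File(destinationDir, 'minimum-groovy-version.txt')
            .text = groovyVersion
    }
}
```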
It can be tricky to ensure a consistent build environment. This is especially the case in open source projects, where contributors may have very diverse hardware and software configurations.
When possible, it's desirable to automate detection and enforcement of any known requirements in the build itself. We use the Gradle wrapper to control the version and installation of Gradle. For the file encoding and Java version used at build-time, we check them in the build, and fail if they aren't acceptable. By failing early with a clear message, we avoid harder to diagnose errors later on in the build process.
Sometimes, it may be useful to work offline, such as in an airplane. Gradle has a "--offline" option that tells Gradle to load dependencies from its cache rather than checking for updates over the network. However, that only works if you have a cached version of the artifacts before you go offline. You could run the tasks that you use to build, test, etc. to ensure that the required artifacts are present, but that can take a while, and only caches the artifacts for the configurations you happen to use in those tasks. This approach allows easily downloading the artifacts for all potential tasks, as long as they use Gradle configurations for dependency resolution.
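The core of that approach is a task that forces resolution of every configuration, something like this sketch (the task name is illustrative, and in practice some configurations may need to be filtered out if they can't be resolved directly):

```groovy
// Resolve every configuration in every project so that all artifacts
// land in Gradle's dependency cache before going offline
allprojects {
    task resolveAllDependencies {
        doLast {
            configurations.each { it.resolve() }
        }
    }
}
```

Run `gradle resolveAllDependencies` while online; afterwards, `gradle build --offline` can work entirely from the cache.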
In the Ratpack build, there are some versions that need to be referenced multiple times.
For example, the version of SLF4J is used as a compile dependency in ratpack-core and as a token when assembling the documentation. It's also used to load compatible versions of SLF4J bridge and backend libraries in several modules. We declare the version as an extension property, which can then be accessed elsewhere in the build.
In addition to version numbers being re-used, sometimes specific dependencies are re-used. In addition to reducing redundancy and giving a single point to update if the artifact ID changes, this also allows for centralizing the declaration of transitive dependency exclusions or dependencies that should always be used together.
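Both ideas can be sketched together as ext properties in a shared script; the coordinates and version numbers here are illustrative:

```groovy
// Shared version numbers and dependency notations, declared once
// (e.g. in gradle/dependencies.gradle) and referenced everywhere
ext {
    commonVersions = [slf4j: '1.7.21']
    commonDependencies = [
        slf4j: "org.slf4j:slf4j-api:${commonVersions.slf4j}",
        // libraries that should always be used together can be grouped
        logging: [
            "org.slf4j:jul-to-slf4j:${commonVersions.slf4j}",
            "ch.qos.logback:logback-classic:1.1.7",
        ],
    ]
}

// Usage in a sub-project:
// dependencies { compile commonDependencies.slf4j }
```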
Another advantage of having a central declaration of dependencies is that it provides a central place to update. In this case, the Groovy team runs a CI server to check the compatibility of new versions of Groovy with widely used community projects. When that's happening, we want to run against the specified version of Groovy, rather than the one we normally would use. If the specified version is a snapshot, we add an additional repository for dependency resolution, as the repositories we normally use don't store snapshots.
Ratpack currently uses SnapCI for our continuous integration builds. However, that wasn't always the case. When I started working on the project, we were on Travis CI, and we tried several others after that. The Ratpack build leverages parallelism when available to improve performance. On my laptop, a clean build takes around 5 minutes, using all 4 cores.
However, due to the parallelism, it can easily hit memory or CPU limitations on cloud CI providers. In order to handle both cases (powerful developer machines and resource-constrained CI environments), we first need to detect which situation currently applies. We do this by checking environment variables set by cloud CI providers, and storing extension properties based on the analysis.
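The detection boils down to something like the following; the environment variable names follow the common conventions of those providers, and the property names are illustrative:

```groovy
// Detect cloud CI from provider-set environment variables and record
// the result as extension properties for later configuration
ext {
    isSnapCi = System.getenv('SNAP_CI') != null
    isTravis = System.getenv('TRAVIS') != null
    isCloudCi = isSnapCi || isTravis
}
```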
Then, we're free to tweak settings based on them. In some cases, we apply the change all the time; in others, we only apply it when running in CI. To control memory usage, we've explicitly stated heap sizes for most tasks that fork a JVM. In addition, we explicitly reduced the stack trace size when running tests, as running larger numbers of threads with large stack traces can bloat memory usage.
In the Ratpack build, there are actually three levels of parallelism potentially at play.
First, the Gradle Test task allows multiple tests to run concurrently. Here, we've set it to run up to three test processes at a time... but restricted it to only one at a time when running in CI.
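That test-level setting is a one-liner per Test task, sketched here assuming an `isCloudCi` extension property like the one set during environment detection:

```groovy
// Fork up to three test JVMs locally, but only one on cloud CI
tasks.withType(Test) {
    maxParallelForks = isCloudCi ? 1 : 3
}
```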
Second, there's project parallelism, which is an incubating feature. That's enabled via the --parallel option to Gradle on the command-line, and is only applicable to multi-project builds. Effectively, if projects don't have any dependencies on each other, it can run them in parallel with each other, restricted by the --max-workers setting, which defaults to the number of available processors.
The third form of parallelism is intra-project parallelism. This is currently an incubating, undocumented capability. Here, we enable it by setting an undocumented system property. You can confirm that it's working by running the build with --info; a different message will be logged if you're using intra-project parallelism. In this mode, it will run tasks within a project in parallel with each other.
So... that's all I've got for today. I hope you found at least something informative or interesting. I think we have time for a couple questions. If there's something we don't get to, I'll be around for the rest of the conference; feel free to come up to me to chat.