Java Resources – DZone
In the realm of Java development, optimizing the performance of applications remains an ongoing pursuit. Profile-Guided Optimization (PGO) is a potent technique capable of substantially enhancing the efficiency of your Java programs. By harnessing runtime profiling data, PGO empowers developers to fine-tune their code and apply optimizations that align with their application's real-world usage patterns. This article delves into PGO in the Java context, providing practical examples to illustrate its efficacy.

Understanding Profile-Guided Optimization (PGO)

Profile-Guided Optimization (PGO) is an optimization technique that uses runtime profiling information to make informed decisions during compilation. It helps the compiler optimize code paths that are frequently executed while avoiding unnecessary work on rarely used paths. To grasp the essence of PGO, let's dive into its key components:

Profiling: At the core of PGO lies profiling, which involves gathering runtime data about the program's execution. Profiling instruments the code to track metrics such as method invocation frequencies, branch prediction outcomes, and memory access patterns. This collected data provides insight into the application's actual runtime behavior.

Training Runs: To generate a profile, the application is executed under various real-world scenarios, or "training runs." These runs simulate typical usage patterns, enabling the profiler to collect data on the program's behavior.

Profile Data: The data collected during the training runs is stored in a profile database. This information encapsulates the program's execution characteristics, showing which code paths are frequently executed and which are seldom visited.

Compilation: During compilation, the Java Virtual Machine (JVM) or the Just-In-Time (JIT) compiler uses the profile data to guide its optimization decisions.
It optimizes frequently traversed code paths more aggressively, potentially resulting in improved execution time or reduced memory usage.

Examples of PGO in Java

To illustrate the tangible benefits of Profile-Guided Optimization in Java, let's explore a series of real-world examples.

Method Inlining

Method inlining is a common optimization technique in Java, and PGO can make it even more effective. Consider the following Java code:

```java
public class Calculator {
    public static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        int result = add(5, 7);
        System.out.println("Result: " + result);
    }
}
```

Without PGO, the JVM might generate a separate method call for add(5, 7). However, when PGO is enabled and profiling data indicates that the add method is frequently called, the JVM can decide to inline it, resulting in code equivalent to:

```java
public class Calculator {
    public static void main(String[] args) {
        int result = 5 + 7;
        System.out.println("Result: " + result);
    }
}
```

Method inlining eliminates the overhead of method calls, leading to a performance boost.

Loop Unrolling

Loop unrolling is another optimization that PGO can intelligently apply. Consider a Java program that calculates the sum of the elements in an array:

```java
public class ArraySum {
    public static int sumArray(int[] arr) {
        int sum = 0;
        for (int i = 0; i < arr.length; i++) {
            sum += arr[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] array = new int[100000];
        // Initialize and fill the array
        for (int i = 0; i < 100000; i++) {
            array[i] = i;
        }
        int result = sumArray(array);
        System.out.println("Sum: " + result);
    }
}
```

Without PGO, the JVM would execute the loop in a straightforward manner. With PGO, however, the JVM can detect that the loop is frequently executed and choose to unroll it for improved performance:

```java
public class ArraySum {
    public static int sumArray(int[] arr) {
        int sum = 0;
        int length = arr.length;
        int i = 0;
        for (; i < length - 3; i += 4) {
            sum += arr[i] + arr[i + 1] + arr[i + 2] + arr[i + 3];
        }
        for (; i < length; i++) {
            sum += arr[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] array = new int[100000];
        // Initialize and fill the array
        for (int i = 0; i < 100000; i++) {
            array[i] = i;
        }
        int result = sumArray(array);
        System.out.println("Sum: " + result);
    }
}
```

In this example, PGO's profiling data has informed the JVM that loop unrolling is worthwhile, potentially resulting in significant performance gains.

Memory Access Pattern Optimization

Optimizing memory access patterns is crucial for the performance of data-intensive Java applications. Consider the following code snippet that processes a large array:

```java
public class ArraySum {
    public static int sumEvenIndices(int[] arr) {
        int sum = 0;
        for (int i = 0; i < arr.length; i += 2) {
            sum += arr[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] array = new int[1000000];
        // Initialize and fill the array
        for (int i = 0; i < 1000000; i++) {
            array[i] = i;
        }
        int result = sumEvenIndices(array);
        System.out.println("Sum of even indices: " + result);
    }
}
```

Without PGO, the JVM may not optimize this strided access pattern effectively.
However, with profiling data, the JVM can identify the stride pattern and optimize accordingly, for example by hoisting the loop bound out of the loop:

```java
public class ArraySum {
    public static int sumEvenIndices(int[] arr) {
        int sum = 0;
        int length = arr.length;
        for (int i = 0; i < length; i += 2) {
            sum += arr[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] array = new int[1000000];
        // Initialize and fill the array
        for (int i = 0; i < 1000000; i++) {
            array[i] = i;
        }
        int result = sumEvenIndices(array);
        System.out.println("Sum of even indices: " + result);
    }
}
```

By aligning memory access patterns with hardware capabilities, PGO can significantly enhance cache performance.

Implementing PGO in Java

Implementing PGO in Java involves a series of steps: collecting profiling data, analyzing it, and applying optimizations to improve your application's performance. Let's explore these steps in greater detail.

Instrumentation

To initiate the PGO process, you need to instrument your Java application for profiling. There are several profiling tools available for Java, each with its own features and capabilities. Commonly used ones include:

VisualVM: A versatile profiling and monitoring tool bundled with the Java Development Kit (JDK). It provides a graphical user interface for performance monitoring and the collection of profiling data.

YourKit: A commercial profiler designed specifically for Java applications. It offers advanced profiling features, including CPU and memory analysis, and its user-friendly interface streamlines collecting and analyzing data.

Java Flight Recorder (JFR): A low-overhead profiling tool that is an integral part of the Java platform and ships with the JDK. It lets you gather comprehensive runtime insights about your application's operation.

Async Profiler: An open-source profiler for Java applications. It excels at collecting data on method invocations, lock contention, and CPU utilization, all while maintaining a minimal impact on system resources.

Choose the profiling tool that best fits your needs, and configure it to collect the profiling data relevant to your application's performance bottlenecks, such as method call frequencies, memory allocation patterns, and thread behavior.

Training Runs

With your chosen profiling tool in place, execute your Java application under various representative scenarios, often referred to as "training runs." These runs should mimic real-world usage patterns as closely as possible, and during them the profiling tool gathers data about your application's execution behavior. Consider scenarios such as:

Simulating user interactions and workflows that represent common user actions.
Stress testing to emulate high-load conditions.
Exploratory testing to cover different code paths.
Load testing to assess scalability.

By conducting comprehensive training runs, you can capture the full range of runtime behaviors your application may exhibit.

Profile Data

The profiling tool collects data from the training runs and stores it in a profile database or log file. This profile data is a valuable resource for understanding how your application performs in real-world scenarios: it records which methods are called most frequently, which code paths are executed most often, and where potential bottlenecks exist. It may include metrics such as:

Method invocation counts.
Memory allocation and garbage collection statistics.
Thread activity and synchronization details.
Exception occurrence and handling.
CPU and memory usage.

This data serves as the foundation for informed optimization decisions.

Compilation

The Java Virtual Machine (JVM) or Just-In-Time (JIT) compiler is responsible for translating Java bytecode into native machine code.
During compilation, the JVM or JIT compiler can use the profile data to guide its optimization decisions. The specifics depend on the JVM implementation you're using:

HotSpot JVM: HotSpot, the most widely used Java runtime, applies profile-guided optimization automatically through its tiered compilation mechanism: it collects profiling data while code runs in the interpreter and the lower compilation tiers, and uses that data when compiling hot methods to fully optimized machine code. Tiered compilation is on by default (controlled by the -XX:+TieredCompilation flag), so no special PGO flags are required.

GraalVM: GraalVM offers a JIT compiler with advanced optimization capabilities that can utilize profile data for improved performance. GraalVM's native-image tool can also generate a native binary with profile-guided optimizations.

Other JVMs: JVMs that support PGO may have their own flags and options; consult the documentation for your specific JVM implementation to learn how to enable it.

Analysis and Tuning

Once you have collected profiling data, the next step is to analyze it and apply optimizations. Here are some considerations:

Identify performance bottlenecks: Analyze the profiling data to identify bottlenecks such as frequently called methods, hot code paths, or memory-intensive operations.

Optimization decisions: Based on the profiling data, make informed decisions about code optimizations. Common optimizations include method inlining, loop unrolling, memory access pattern improvements, and thread synchronization enhancements.

Optimization techniques: Implement the chosen optimizations using appropriate techniques and coding practices. For example, if method inlining is recommended, refactor your code to inline frequently called methods where it makes sense.

Benchmarking: After applying optimizations, benchmark your application to measure the improvements, and use profiling tools to verify that the optimizations have actually addressed the bottlenecks identified during profiling.

Reiteration

Performance optimization is an ongoing process. As your application evolves and usage patterns change, periodic reprofiling and re-optimization are crucial for maintaining peak performance. Continue to collect profiling data during the different phases of your application's lifecycle and adapt your optimizations accordingly.

Conclusion

Profile-Guided Optimization (PGO) is a potent tool in the Java developer's toolkit, offering a means to elevate application performance by tailoring optimizations to the usage patterns encountered in the real world. Whether through method inlining, loop optimization, or memory access pattern refinement, PGO can significantly improve the efficiency and speed of Java applications while making them more resource-efficient. As you set out to optimize your Java applications, consider PGO a powerful ally for unleashing their full potential.
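The loop-unrolling transformation shown earlier is something the JIT normally performs for you, but it is easy to verify by hand that the unrolled form is equivalent to the simple form. The following is a minimal, self-contained sketch (the class and method names are mine, for illustration only):

```java
public class UnrollCheck {

    // Straightforward summation, as the source is usually written.
    static int sumSimple(int[] arr) {
        int sum = 0;
        for (int i = 0; i < arr.length; i++) {
            sum += arr[i];
        }
        return sum;
    }

    // Manually unrolled by a factor of four, with a tail loop for the
    // remaining elements -- the shape a JIT compiler might produce.
    static int sumUnrolled(int[] arr) {
        int sum = 0;
        int i = 0;
        for (; i < arr.length - 3; i += 4) {
            sum += arr[i] + arr[i + 1] + arr[i + 2] + arr[i + 3];
        }
        for (; i < arr.length; i++) {
            sum += arr[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = new int[100_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
        // Both forms must agree for any array length, including lengths
        // that are not a multiple of the unroll factor.
        System.out.println(sumSimple(data) == sumUnrolled(data)); // true
        int[] odd = {1, 2, 3, 4, 5};
        System.out.println(sumSimple(odd) == sumUnrolled(odd));   // true
    }
}
```

The tail loop is the important detail: without it, the unrolled version would silently drop up to three trailing elements whenever the length is not a multiple of four.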
Are you interested in joining the cloud-native world and wondering what cloud-native observability means for you? Did you always want to know more about instrumentation, metrics, and your options for coding with open standards? Are you a Java developer looking for a working example to get started instrumenting your applications and services? Look no further: this article provides an easy-to-understand guide to instrumenting your Java applications using open standards. You'll learn what open-source metrics are, which projects are used to collect them, and which libraries are available to you as a Java developer for metrics instrumentation. You'll then apply that instrumentation in a fully working Java example covering the four main metric types and, finally, set up metrics collection to explore your Java metrics in real time. Let's start with some background and explore your options for open-source metrics projects.

Open Source Metrics

There are many reasons to look to the projects of the Cloud Native Computing Foundation (CNCF) when investigating cloud-native observability, and when you investigate how to integrate metrics instrumentation with your Java applications and services, you quickly land on the Prometheus project. Much has been written about why Prometheus is a great fit for cloud-native observability; many consider it the de facto standard for metrics instrumentation, transportation, collection, and querying. Adoption of the project and its protocols is wide enough to call it an industry standard, with a vibrant community of exporters, natively instrumented projects, and client libraries (as of this writing):

960+ exporters (agents exposing Prometheus metrics)
50+ natively instrumented projects
20+ instrumentation client libraries

It's one of those instrumentation client libraries that we will be exploring on our journey to instrumenting our Java applications and services.
Java Client Library

Java has been a popular and much-used language over the years in many organizations, so it is nice to find a Java client library for instrumentation as part of the Prometheus metrics ecosystem. This library recently reached version 1.0.0 status, which was a good reason for me to update my example Prometheus Java Metrics Demo project, used in an instrumentation lab in my free online Prometheus workshop. In the documentation's "Getting Started" section you'll find a menu entry for metric types. It outlines the metric types this library implements, which are based on OpenMetrics, currently a CNCF sandbox project. The supported metrics are:

Counter – The most common metric type; counters only increase, never decrease.
Gauge – For current measurements, such as the current temperature in Celsius.
Histogram – For observing distributions, like latency distributions for services.
Summary – Also for observing distributions, but summaries maintain quantiles.
Info
StateSet
GaugeHistogram

In this article, we'll focus on the first four metric types; the last three are seldom used, aimed at edge cases, and not officially part of the Prometheus core metrics API.

Instrumenting Java Example

To explore how to instrument a Java application or service, you want hands-on example code, right? I'm the same way, so here's a project I put together: a sample Java application or service with the four main client-library-supported metric types implemented. The easiest way to get started with my Prometheus Java Metrics Demo is to either clone it or download one of the provided archives and unpack it. The README file provides all the instructions you need to install it, but let's first look at the basic outline of the project.

Download and unzip the project.
Import it into your favorite IDE for Java development (I'm using VSCode).
There are a few important files in this project for you to focus on: pom.xml, BasicJavaMetrics.java, and the Buildfile.

Dependencies

If you explore the pom.xml file, you'll see the dependencies on the Java client are sorted out for you:

```xml
<dependencies>
  <dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>prometheus-metrics-core</artifactId>
    <version>1.0.0</version>
  </dependency>
  <dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>prometheus-metrics-instrumentation-jvm</artifactId>
    <version>1.0.0</version>
  </dependency>
  <dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>prometheus-metrics-exporter-httpserver</artifactId>
    <version>1.0.0</version>
  </dependency>
  <dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-servlet</artifactId>
    <version>9.4.15.v20190215</version>
  </dependency>
</dependencies>
```

If you build this project, you get a JAR file built from BasicJavaMetrics.java:

```xml
<!-- Build a full jar with dependencies -->
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <mainClass>io.chronosphere.java_apps.BasicJavaMetrics</mainClass>
      </manifest>
    </archive>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
  <executions>
    <execution>
      <id>make-assembly</id>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

This single class file, BasicJavaMetrics.java, illustrates the setup and instrumentation you need to make use of the four metric types discussed above.

Basic Java Metrics

The example is a working Java class with a main method where all of the action happens, both the metrics instrumentation and the mocked application or service code. To keep confusion to a minimum, there are two parts to the code, and it's all found in the main method of this Java class.
The first part sets up the instrumentation of a counter, a gauge, a histogram, and a summary. Along with the actual metrics instrumentation, a web server is started to serve the metrics endpoint for Prometheus to scrape (collect). This first part finishes with statements in the system log reporting that the web server has started, the path it's listening on, and that your Java metrics setup is ready:

```java
public static void main(String[] args) throws InterruptedException, IOException {
    // Set up default Java metrics.
    //JvmMetrics.builder().register(); // Uncomment to see all JVM metrics.

    // Initialize the four basic metric types (the counter(), gauge(),
    // histogram(), and summary() helpers are defined elsewhere in this class).
    Counter counter = counter();
    Gauge gauge = gauge();
    Histogram histogram = histogram();
    long start = System.nanoTime();
    Summary summary = summary();

    // Start a thread that applies values to the metrics once per second.
    Thread bgThread = new Thread(() -> {
        while (true) {
            try {
                counter.labelValues("ok").inc();
                counter.labelValues("ok").inc();
                counter.labelValues("error").inc();
                gauge.labelValues("value").set(rand(-5, 10));
                histogram.labelValues("GET", "/", "200")
                        .observe(Unit.nanosToSeconds(System.nanoTime() - start));
                summary.labelValues("ok").observe(rand(0, 5));
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    bgThread.start();

    // Set up the web server serving the metrics endpoint.
    HTTPServer server = HTTPServer.builder()
            .port(METRICS_PORT)
            .buildAndStart();
    System.out.println("HTTPServer listening on port http://localhost:" + server.getPort() + "/metrics");
    System.out.println("");
    System.out.println("Basic Java metrics setup successful...");
    System.out.println("");
```

The second part is the actual Java code you would write to implement your application or service. To keep the focus on instrumenting metrics, it is simulated here by a statement to the system log that your Java application or service has started:

```java
    // Insert your code here for your application or microservice.
    System.out.println("My application or service started...");
    System.out.println("");
```

Below you see the output once this Java project has been compiled (mvn clean install) and started (java -jar target/java_metrics-1.0-SNAPSHOT-jar-with-dependencies.jar):

```shell
HTTPServer listening on port http://localhost:7777/metrics

Basic Java metrics setup successful...

My application or service started...
```

The web server reports that the Java metrics are being exposed on port 7777 for any Prometheus instance to scrape, followed by confirmation that the four basic metrics have been initialized and the application code started. Congratulations: you're now running a fully instrumented Java application or service!

The Build File

For those looking to run this example in a container, a build file is provided for use with either Podman or Docker tooling. Just run the following to build and start the container image:

```shell
# Build project jar file.
$ mvn clean install

# Build the container image using podman or docker.
$ podman build -t basic-java-metrics -f Buildfile
$ docker build -t basic-java-metrics -f Buildfile

# Run the image, mapping local machine port 7777 to the
# exposed port 7777 on the running container.
$ podman run -p 7777:7777 basic-java-metrics
$ docker run -p 7777:7777 basic-java-metrics
```

You're now able to find your fully instrumented Java application or service in a container at http://localhost:7777.

Exploring Your Metrics

When you visit your exposed metrics endpoint at http://localhost:7777/metrics, you should see the exposed metric names and their current values. If you want to collect these metrics with a Prometheus instance, you just need to add the following target to your Prometheus configuration file (often prometheus.yml):

```yaml
scrape_configs:
  # Scraping java metrics.
  - job_name: "java_app"
    static_configs:
      - targets: ["localhost:7777"]
```

After restarting the Prometheus instance to pick up the changes, you can verify that the metrics are being scraped and stored for visualization as needed; for example, they show up in the command-completion feature of the Prometheus query console. If you want to explore basic open-source Java metrics instrumentation further, creating each metric type from scratch in your own Java class, you'll have a great time in the "Instrumenting Applications" lab of the free online getting-started-with-Prometheus workshop (linked earlier). Check it out today, and any feedback you might have is welcome!
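As a closing aside: if you'd like an intuition for the semantics behind the first two metric types without pulling in the client library at all, here is a dependency-free toy sketch. The class names are mine, not the Prometheus API; the point is only that a counter may move in one direction, while a gauge tracks a current value that can move either way.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.DoubleAdder;

public class ToyMetrics {

    // Counter semantics: monotonically increasing, thread-safe.
    static final class ToyCounter {
        private final DoubleAdder value = new DoubleAdder();

        void inc() { inc(1.0); }

        void inc(double amount) {
            if (amount < 0) {
                throw new IllegalArgumentException("counters only go up");
            }
            value.add(amount);
        }

        double get() { return value.sum(); }
    }

    // Gauge semantics: a current measurement that can rise or fall.
    static final class ToyGauge {
        private final AtomicLong bits = new AtomicLong(Double.doubleToLongBits(0.0));

        void set(double v) { bits.set(Double.doubleToLongBits(v)); }

        double get() { return Double.longBitsToDouble(bits.get()); }
    }

    public static void main(String[] args) {
        ToyCounter requests = new ToyCounter();
        requests.inc();
        requests.inc(2.0);

        ToyGauge temperature = new ToyGauge();
        temperature.set(21.5);
        temperature.set(-3.0); // gauges may decrease

        System.out.println(requests.get());    // 3.0
        System.out.println(temperature.get()); // -3.0
    }
}
```

The real client library adds labels, exposition formats, and a registry on top, but the one-way versus two-way distinction above is the core behavioral contract of these two types.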
Jakarta EE is a set of specifications: an open-source platform that offers a collection of software components and APIs (Application Programming Interfaces) for developing enterprise applications and services in Java. In recent years, Jakarta EE has become one of the preferred frameworks for professional enterprise software development in Java. There are probably dozens of such open-source APIs nowadays, but what makes Jakarta EE unique is that all of its specifications are produced by a standards process originally called the JCP (Java Community Process) and now called the EFSP (Eclipse Foundation Specification Process). These specifications, initially called JSRs (Java Specification Requests) and now simply Eclipse specifications, come out of a consortium bringing together the most important organizations in today's Java software development field, originally led by the JCP and now stewarded by the Eclipse Foundation. Consequently, as opposed to competitors whose APIs evolve according to unilateral decisions taken by their implementers, Jakarta EE is an expression of the consensus of companies, user groups, and communities.

From J2EE to Java EE

Jakarta EE is probably the best thing that has happened to Java since its birth more than 20 years ago. Created in the early 2000s, the J2EE (Java 2 Enterprise Edition) specifications were an extension of the Java programming language, then known as J2SE (Java 2 Standard Edition). J2EE was a set of specifications intended to facilitate the development of enterprise-grade Java applications. It was also intended to describe a unified, standard API allowing developers to deal with complex functionality like distributed processing, remote access, transaction management, security, and much more.
These specifications were maintained by the JCP, as explained, and led by an executive committee in which Sun Microsystems, the original developer of the Java programming language, had a central role. The year 2000 saw the birth of J2EE 1.2, the first release of what would later be called "server-side" Java. That era was the epoch of enterprise multi-tier applications that some today would describe as monoliths: graphical user interfaces on the web tier; business-delegate components such as stateless EJBs (Enterprise Java Beans), MDBs (Message-Driven Beans), and other remote services on the middle tier; and persistence components on the data access tier. Clusters, load balancers, fail-over strategies, and sticky HTTP sessions were part of the de facto architectural standard that every enterprise application had to meet, and all these components were deployed on application servers like WebSphere, WebLogic, JBoss, Glassfish, and others. From 2006 onward, Sun Microsystems decided to simplify the naming of the J2EE specifications, then at version 1.4, and, starting with the 5th release, renamed them Java EE (Java Enterprise Edition); similarly, the standard edition became Java SE (Java Standard Edition). A few years later, in 2010, Sun Microsystems was acquired by Oracle, which became the owner of both Java SE and Java EE. During this time, the JCP continued to produce hundreds of specifications in the form of JSRs, covering all aspects of enterprise software development; the complete list of JSRs may be found on the JCP site. Java EE was a huge success, a real revolution in Java software architecture and development, and its implementations, whether open-source or commercial, were ubiquitous in the enterprise IT landscape.
Oracle thereby inherited two of these implementations: Glassfish, Sun's open-source reference implementation, and WebLogic, a commercial platform obtained through the purchase of BEA. But Oracle was and remains first and foremost an RDBMS vendor and, despite being the new owner of Java SE/EE as well as of Glassfish and WebLogic, its relationship with the Java and Java EE community was strained. Oracle's Java SE support became a commercial offering available under a license and requiring a subscription, while Java EE was no longer actively maintained and was finally donated in 2017 to the Eclipse Foundation under its new name: Jakarta EE.

From Java EE to Jakarta EE

With Jakarta EE, server-side Java started a new life. First came Jakarta EE 8, which kept the original javax.* namespace. Then came Jakarta EE 9, a transitional release that moved the APIs to the new jakarta.* namespace. The current release, Jakarta EE 10, among many other novelties, provides a fully coherent platform on the new namespace, and the new Jakarta EE 11 release is in progress and scheduled to be delivered in June 2024. The architecture of Java enterprise-grade services and applications had continued to evolve under Oracle's stewardship, but the Java EE specifications were in a kind of status quo before becoming Eclipse Jakarta EE. The company never really managed to set up a dialogue with the users, communities, and working groups involved in the recognition and promotion of Java enterprise-grade services; their evolution requests and expectations went unaddressed by an owner that didn't seem interested in its new responsibility. Little by little, this led to a guarded reaction from software architects and developers, who began to prefer and adopt alternative technological solutions to application servers.
Several kinds of solutions started to appear years ago on the model of Spring, an open-source Java library claiming to be an alternative to Jakarta EE. In fact, Spring has never been a true alternative to Jakarta EE because, in all its versions and flavors (including but not limited to Spring Core, Spring Boot, and Spring Cloud), it is based on Jakarta EE and needs Jakarta EE in order to run. As a matter of fact, an enterprise-grade Java application needs implementations of specifications like Servlets, JAX-RS (Java API for RESTful Web Services), JAX-WS (Java API for XML Web Services), JMS (Java Message Service), MDB (Message-Driven Beans), CDI (Contexts and Dependency Injection), JTA (Java Transaction API), JPA (Java Persistence API), and many others. Spring, however, doesn't implement any of these specifications itself; it only provides interfaces to their implementations, relies on them, and, as such, is only a Jakarta EE consumer or client. So Spring is an alternative to Jakarta EE about as much as a remote control is an alternative to the television set. Nevertheless, marketing is sometimes more impressive than the technology itself, and this is what happened with Spring, especially since the emergence of Spring Boot. While trying to find alternatives to Jakarta EE and to remedy perceived issues like the heaviness and high prices of application servers, some software professionals adopted Spring Boot as a development platform. And since, as shown above, they needed Jakarta EE implementations anyway for even basic web applications, they deployed these applications on open-source servlet engines like Tomcat, Jetty, or Undertow. For features beyond servlets, such as JPA or JMS, Spring Boot provides integration with Hibernate or ActiveMQ. And should these professionals need even more advanced features, like JTA, they went fishing on the internet for free third-party implementations like Atomikos.
Additionally, in the absence of an official integration, they tried to integrate these features into their servlet engine by themselves, with all the risks that entails. Other solutions closer to real Jakarta EE alternatives have emerged as well; among them, Netty, Quarkus, and Helidon are the best known and most popular. All these solutions were based on a handful of software design principles (single concern, discrete boundaries, transportability across runtimes, auto-discovery, and so on) that have been known since the dawn of time. But because the software industry continuously needs new names, the name found for these alternative solutions was microservices. More and more microservice-architecture-based applications appeared over the following years, to the point that "microservice" became one of the most common buzzwords in the software industry. In order to optimize and standardize microservices technology, the Eclipse Foundation decided to apply to microservices the same process used to design the Jakarta EE specifications, and thus Eclipse MicroProfile was born. Eclipse MicroProfile is, like Jakarta EE, a group of specifications that draws on several existing microservices frameworks, such as Spring Boot, Quarkus, and Helidon, and unifies their base principles in a consistent, standard API set. Again like Jakarta EE, the Eclipse MicroProfile specifications have to be implemented by software vendors. While some of these implementations, like OpenLiberty, Quarkus, and Helidon, address only the Eclipse MicroProfile specifications, others, like WildFly, Red Hat EAP, Glassfish, or Payara, attempt to straddle both worlds and unify Jakarta EE and Eclipse MicroProfile in a single, consistent platform.

Conclusion

As a continuation of its previous releases, Jakarta EE is a revolution in Java enterprise-grade applications and services. It retains the open-source spirit and is guided by collaboration between companies, communities, and user groups rather than commercial goals alone.
It's taken nearly 30 years, but Java 21's introduction of Virtual Threads will finally make multitasking in Java almost effortless. In order to fully appreciate their revolutionary nature, it is helpful to take a look at the various imperfect solutions offered by Java over the years to solve the "do useful work while we wait for something else" problem.

Java 1

The introduction of Java version 1 in 1995 was remarkable: a strongly-typed, object-oriented, C-like-syntax language which offered many features, including easy-to-use Threads. The Thread class represented an object that would run selected code in a separate thread from the main execution thread. The Thread object itself was a wrapper for an actual OS-level thread known as a platform thread, a.k.a. kernel thread. The logic to be executed was described by implementing a Runnable interface. Java took care of all of the complexity of launching and managing this separate thread. It would now be almost trivial to perform multiple tasks simultaneously, or so it seemed. Consider the following example:

Java

package example.java1;

public class Simple {

    public void threadExample() {
        Runnable r = new Runnable() {
            public void run() {
                System.out.println("do something in background");
            }
        };
        Thread t = new Thread(r);
        t.start();
    }
}

The main thread describes an implementation of a Runnable interface, which is doing trivial work in this case. A second thread, let's call it the background thread, is instantiated using this Runnable. The start() method commands the background thread to work while the main thread resumes whatever work it needs to do. Excellent! But an immediate challenge faced by first-generation Java developers was how to pass a value from the background thread back to the main thread.
Consider this example, where we wish to execute a long-running, blocking process in the background, then later use its returned value in the main thread:

Java

package example.java1;

import java.util.concurrent.TimeUnit;

public class PassingValues {

    // Variable shared between threads:
    private static String result;

    // Utility method to simulate a delay:
    private static void delay(int i) {
        try {
            TimeUnit.SECONDS.sleep(i);
        } catch (InterruptedException e) {}
    }

    // Create an instance of a Runnable that updates the shared variable:
    private static Runnable workToDo = new Runnable() {
        public void run() {
            String s = longRunningProcess();
            synchronized (this) {
                result = s; // Blocking! Safely update the shared variable.
            }
        }

        // Simulate a long-running, blocking process:
        private String longRunningProcess() {
            delay(2); // blocking!
            return "Hello World!";
        }
    };

    public static void main(String[] args) {
        // Create a Thread, pass the Runnable, and start it:
        Thread thread = new Thread(workToDo);
        thread.start();

        // Do other work...

        // Wait for the thread to complete:
        try {
            thread.join(); // blocking!
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // Use the result:
        String s = null;
        synchronized (workToDo) {
            s = result; // Blocking! Safely acquire the shared variable.
        }
        System.out.println("Result from the thread: " + s);
    }
}

The Runnable implementation describes a long-running process, here simulated by calling sleep(). The main thread spawns a background thread and launches it to perform work. The main thread is (hopefully) able to perform useful work while the background thread is busy doing separate work. Fantastic! However, at a certain point, the main thread must obtain the result from the background thread before it can continue, and may have no safe choice but to wait until the result is ready. Notice the "// blocking!" comments in the code. The background thread is blocked while waiting for a result.
Real-world examples of this blocking occur when calling a database, an external web service, etc. While blocked, all of the resources associated with a thread are effectively stuck waiting, unable to do any useful work. The main thread is blocked while waiting for the background thread, unable to do any useful work. Synchronization blocks are used to safely access variables used by multiple threads; a bit excessive in this case, but useful to illustrate other blocking points. The platform threads seen here are relatively lightweight compared to spawning separate OS-level processes, but large numbers of them can consume a lot of JVM resources. The threads above spend most of their time waiting for something to do. In this example, the enemy is blocking, simulated by the sleep() method. The imperfect solution offered by multithreading doesn't solve the problem; it merely parks the blocking in a background thread. Eventually, threads will block each other when coordinating activities, seen here in the join() call and synchronization blocks. Blocking is a terrible waste of resources whenever it happens. For years, Java treated blocking as an unsolvable obstacle, using the imperfect, resource-heavy solution of multithreading to address it. Against this backdrop, one puzzle faced by Java developers was how a single-threaded scripting language like JavaScript could outperform multithreaded, compiled Java code at certain tasks while using fewer resources. Consider this year-2000 JavaScript code, which achieves the same result as the previous example using a fraction of its memory:

JavaScript

var result;

// Function to simulate a long-running, blocking process
// with an inline callback
function longRunningProcess(callback) {
  setTimeout(() => {
    callback("Hello World!");
  }, 2000); // blocking!
}

// Call the function with an inline callback
longRunningProcess((data) => {
  result = data;
  // Do other work...
  // Use results
  console.log("Result from the inline callback:", result);
});

JavaScript uses an event loop within a single thread. Callbacks are function references used to identify work to be done when a task is completed. Rather than tie up the one-and-only thread waiting for work to complete, JavaScript places the callback reference on its internal message queue and continues working. The callback will be executed asynchronously by the event loop at a later time. There is no blocking, or, to be more precise, no blocking penalty; any waiting for results does not tie up the execution thread. Notice the longRunningProcess() in both examples. The signatures are different: Java expects a value returned when the work is done; JavaScript expects a callback function to be invoked when the work is done. Java demonstrates synchronous execution, JavaScript asynchronous. Both involve waiting for some work to be done, but the synchronous version occupies a thread while it waits. To be fair, multiple Java threads will outperform JavaScript if 1) the threads are busy most of the time doing useful in-memory work, and 2) coordination between threads is absent or minimal. Neither is the case in many real-world situations. One more consideration: the code shown here creates a Thread and allows it to be disposed of through garbage collection, which is wasteful of time and memory. Java 1 didn't provide any thread-pooling capability, so early Java developers created their own, with management logic to acquire and release threads from the pool. To summarize threading in Java 1:

- Multithreading doesn't solve the blocking problem, and blocking is often the main problem.
- Multithreading is resource-intensive compared to single-threaded, asynchronous models (i.e., JavaScript).
- Threads are low-level constructs requiring code to manage and coordinate.
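The hand-rolled pools mentioned above typically amounted to a task queue guarded by wait()/notify(). Here is a minimal, hypothetical sketch of what early Java developers had to write themselves; the class and method names are illustrative, not taken from any real library, and a production pool would need far more care around error handling and shutdown:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A hypothetical, minimal hand-rolled thread pool in the spirit of the
// Java 1 era (names and structure are illustrative only).
public class TinyThreadPool {

    private final Queue<Runnable> tasks = new ArrayDeque<>();
    private volatile boolean running = true;

    public TinyThreadPool(int workers) {
        for (int i = 0; i < workers; i++) {
            Thread t = new Thread(() -> {
                while (running) {
                    Runnable task;
                    synchronized (tasks) {
                        // Sleep until a task arrives or the pool shuts down:
                        while (running && tasks.isEmpty()) {
                            try { tasks.wait(); } catch (InterruptedException e) {}
                        }
                        if (!running) return;
                        task = tasks.poll();
                    }
                    task.run(); // reuse this same thread for task after task
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }

    public void submit(Runnable task) {
        synchronized (tasks) {
            tasks.add(task);
            tasks.notify(); // wake one idle worker
        }
    }

    public void shutdown() {
        running = false;
        synchronized (tasks) { tasks.notifyAll(); }
    }
}
```

The point of the sketch is the bookkeeping burden: acquiring, releasing, and waking threads is all application code, which Java 5's Executors would later absorb.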
Later releases of Java would address all of these concerns, though it would take a while to get there. Java 1.2 (1998) and 1.3 (2000) made only minor contributions to threading; the next batch of improvements would have to wait for the 2004 release of Java 5 (1.5).

Java 5

Java 5 (2004) introduced several features to improve thread management and the coordination of work between threads; we'll focus on Future, ExecutorService, and Callable from the java.util.concurrent package. ExecutorService abstracts away the mechanism by which something is executed. Implementations exist for a single thread, thread pools of fixed size, thread pools that grow and shrink, etc. A Callable is like a Runnable, except it can return a value. To hold the value returned from a Callable, we need a special object which can live in one thread but hold the return value populated by another. This is the Future object, which represents a promise from one thread to populate a value for another thread.

Java

package example.java5;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class Demo {

    // Utility method to simulate a delay:
    private static void delay(int i) {
        try {
            TimeUnit.SECONDS.sleep(i);
        } catch (InterruptedException e) {}
    }

    // Create a Callable that returns a result:
    private static Callable<String> workToDo = new Callable<String>() {
        public String call() throws Exception {
            return longRunningProcess();
        }

        // Simulate a long-running, blocking process:
        private String longRunningProcess() {
            delay(2); // blocking!
            return "Hello World!";
        }
    };

    public static void main(String[] args) {
        // Create a single-thread ExecutorService and submit the work:
        Future<String> future = Executors
            .newSingleThreadExecutor()
            .submit(workToDo);

        // Do other work...

        // Wait for the thread to complete:
        String result = "";
        try {
            result = future.get(); // blocking!
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }

        // Use the result:
        System.out.println("Result from the thread: " + result);
    }
}

Notice the differences from Java 1:

- The Callable is better than Runnable when the background thread needs to return a value to the main thread. There is no more shared variable or synchronization between the two threads.
- The Executors class contains factory methods for creating an ExecutorService. We don't need to build our own thread pools.
- The Future represents a promised result. Its get() method returns the result from the other thread, blocking the main thread if the result is not yet available. We can even use it to cancel the other thread.

Notice what has not changed: the enemy here is still blocking. The lines commented with "// blocking!" indicate the main points where it can occur: the long-running process simulated by the sleep() method, and the future.get() method, which may block if the background thread is not finished. While Java 5 provides more syntactic elegance and ease of use, we are still using threads as an imperfect solution. In fact, Java 5 may have made the situation worse! By making multithreading easier to use, more developers explored it as the answer to various performance problems. It is common sense to think that if we break up a single task into many parts worked on in parallel, the overall task will go faster.
Unless the various threads are busy a large percentage of the time and have little need to coordinate, multithreaded Java programs generally consume more memory than other models and can even demonstrate worse performance due to the overhead of context switching. Additionally, a need was growing to chain multiple pieces of work together, executing each part in the background. Consider the following less-than-elegant solution:

Java

package example.java5;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ChainingAsync {

    public static void main(String[] args) {
        try {
            ExecutorService svc = Executors.newSingleThreadExecutor();

            Future<String> future1 = svc.submit(new WorkToDo1());
            // Do other work...
            String result1 = future1.get(); // blocking!

            Future<String> future2 = svc.submit(new WorkToDo2(result1));
            // Do other work...
            String result2 = future2.get(); // blocking!

            Future<String> future3 = svc.submit(new WorkToDo3(result2));
            // Do other work...
            String result3 = future3.get(); // blocking!

            System.out.println("Result from the thread: " + result3);
        } catch (Exception e) {}
    }

    private static class WorkToDo1 implements Callable<String> {
        public String call() throws Exception {
            delay(1); // blocking!
            return "Hello";
        }
    }

    private static class WorkToDo2 implements Callable<String> {
        private final String prefix;
        public WorkToDo2(String prefix) { this.prefix = prefix; }
        public String call() throws Exception {
            delay(1); // blocking!
            return prefix + " World";
        }
    }

    private static class WorkToDo3 implements Callable<String> {
        private final String prefix;
        public WorkToDo3(String prefix) { this.prefix = prefix; }
        public String call() throws Exception {
            delay(1); // blocking!
            return prefix + "!!";
        }
    }

    // Utility method to simulate a delay:
    private static void delay(int i) {
        try {
            TimeUnit.SECONDS.sleep(i);
        } catch (InterruptedException e) {}
    }
}

This is getting messy. Chaining work requires custom Callables to receive and return the required values. This code attempts to multitask as much as possible, but its effectiveness depends on how much "other work" is performed in the main thread. If that work is insignificant, we are consuming more resources than are needed to simply call three functions in order:

Java

package example.java5;

import java.util.concurrent.TimeUnit;

public class ChainingSync {

    public static void main(String[] args) {
        String result1 = doThing1();
        String result2 = doThing2(result1);
        String result3 = doThing3(result2);
        System.out.println("Result from a single thread: " + result3);
    }

    private static String doThing1() {
        delay(1);
        return "Hello";
    }

    private static String doThing2(String input) {
        delay(1);
        return input + " World";
    }

    private static String doThing3(String input) {
        delay(1);
        return input + "!!";
    }

    // Utility method to simulate a delay:
    private static void delay(int i) {
        try {
            TimeUnit.SECONDS.sleep(i);
        } catch (InterruptedException e) {}
    }
}

Even programmers with very little experience can follow the second example, yet it runs with fewer resources than the first. Now consider a JavaScript equivalent, with no blocking:

JavaScript

(function() {
  setTimeout(function doThing1() {
    const result1 = "Hello";
    setTimeout(function doThing2() {
      const result2 = result1 + " World";
      setTimeout(function doThing3() {
        const result3 = result2 + "!!";
        console.log("Result from JavaScript:", result3);
      }, 1000);
    }, 1000);
  }, 1000);
})();

(There is a reason you chose to work in Java rather than JavaScript.) The horrifying JavaScript syntax you see here is referred to as "callback hell".
On the bright side, there is no blocking, only a single thread is used, and it consumes only a fraction of the resources of the previous Java examples. Java would need a better way to chain Futures (promises) from one to another without callback hell, and of course, we still need to do something about the blocking. (To be fair, modern JavaScript uses Promises and async/await to implement the logic you see here more clearly, but this article is a history lesson.)

Java 7

The next two releases of Java, 1.6 (2006) and 1.7 (2011), addressed neither concern. Java 1.7 did introduce a novel improvement known as the ForkJoinPool. This new feature provided a way to divide a set of work into chunks for parallel processing. The ForkJoinPool, together with RecursiveTask, made it easy to drop off a large amount of work, say in a Collection, and have it broken down recursively into smaller and smaller chunks for parallel processing by different threads (forking), then later collect the results of all the finished work (joining). While interesting in its own right, a byproduct of this development was the introduction of a work-stealing algorithm, whereby idle threads in the ForkJoinPool can "steal" tasks from other, busy threads. The ForkJoinPool is a good thread pool option for general use, but it doesn't help us with our blocking issue, since blocked threads are not considered "idle" in this context. Interestingly, work-stealing foreshadowed the idea that a blocked thread might be released to do other work, which is the essence of Java 21's Virtual Thread solution.

Java 8

Java 8 (2014) was a BIG release. On the multitasking front, the star addition was CompletableFuture, a big improvement over the Future from Java 5. To quickly and easily run work in a separate thread, the CompletableFuture class provides a number of "*Async" methods which execute the desired code in another thread provided (by default) by a ForkJoinPool.
The caller simply calls the method directly without fumbling with Threads or an ExecutorService. Callback methods can register the actions to take place when the asynchronous work is done. Finally, lambda syntax reduces the heavy boilerplate seen in our Java 5 example:

Java

package example.java8;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class Demo {

    // Utility method to simulate a delay:
    private static void delay(int i) {
        try {
            TimeUnit.SECONDS.sleep(i);
        } catch (InterruptedException e) {}
    }

    // Simulate a long-running, blocking process:
    private static CompletableFuture<String> longRunningProcess() {
        return CompletableFuture.supplyAsync(() -> {
            delay(2); // blocking!
            return "Hello World!";
        });
    }

    public static void main(String[] args) {
        // Create a CompletableFuture and register a callback:
        longRunningProcess().thenAccept(result -> {
            System.out.println("Result from CompletableFuture: " + result);
        });

        // Do other work...

        // Introduce a delay to prevent the JVM
        // from exiting before work is complete.
        delay(3);
    }
}

Notice:

- The calling thread no longer has to fumble with Executors or an ExecutorService.
- The main thread does not block waiting for completion of the asynchronous execution. Instead, the work to be done is registered as a callback.
- The final delay() is not needed in a real-world application. It is used here only to prevent the main non-daemon thread from exiting before the callback in the daemon thread can be demonstrated.

The callback shown above allows several actions to be chained together. CompletableFuture supports a fluent interface to allow composition of asynchronous workflows: a pipeline of asynchronous operations where each executes after the previous one completes. Many methods are provided which accept Suppliers, Consumers, Functions, and other interfaces that can be replaced with lambda expressions.
Java

package example.java8;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ChainDemo {

    // Utility method to simulate a delay:
    private static void delay(int i) {
        try {
            TimeUnit.SECONDS.sleep(i);
        } catch (InterruptedException e) {}
    }

    private static String doThing1() {
        delay(1);
        return "Hello";
    }

    private static String doThing2(String s) {
        delay(1);
        return s + " World";
    }

    private static String doThing3(String s) {
        delay(1);
        return s + "!!";
    }

    public static void main(String[] args) {
        CompletableFuture
            .supplyAsync(() -> doThing1())
            .thenApplyAsync(s -> doThing2(s))
            .thenApplyAsync(s -> doThing3(s))
            .thenAcceptAsync(System.out::println);

        System.out.println("Main thread continues, unhindered.");

        // Introduce a delay to prevent the JVM
        // from exiting before work is complete.
        delay(4);
    }
}

Far simpler and cleaner than the Java 5 equivalent! Each fluent method you see here returns a CompletableFuture. The "*Async" methods run asynchronously in a thread provided by a ForkJoinPool. This example establishes a chain of callbacks, each executed in a background thread when the previous callback completes. In this case, the "Main thread..." message actually displays before the message produced by the CompletableFuture, since the context switching between threads in the chain consumes time. CompletableFuture is a great pragmatic improvement over earlier models: much easier to use, and it addresses the practical concerns of most developers, executing long-running calls in the background while adding the ability to assemble chains of actions. However, CompletableFuture does nothing special when the background thread blocks, or in cases where we must wait for the ultimate result before returning to a caller. The chaining of multiple actions to be completed in other threads also makes our debugging, testing, and exception-handling work more difficult.
Exceptions occurring within our own code, the doThing*() methods, will not pose a challenge. However, if an exception is thrown within the asynchronous framework, say because one of our methods returned a null, the resulting NullPointerException would be very tough to track down; the resulting stack trace might not contain any references to our code at all. CompletableFuture does raise an interesting possibility: what if every method in our entire application allowed for execution in another thread, including calls to databases, web services, etc.? If every method returned a CompletableFuture, it might be possible to construct an entire application's logic as a chain of steps to be completed by any thread available in a pool. If it were possible to replace every blocking call with a reference to a callback, it might be possible to free threads from blocking and gain a tremendous amount of productive work out of one or two threads. The stage was set for reactive programming...

Java 9

Reactive programming builds upon the paradigm seen with chaining and composing CompletableFutures, but its origins are elsewhere. Before the java.util.concurrent.Flow types were introduced in Java 9 (2017), Java developers were already using RxJava and Spring's Reactor. The latter used the Reactive Streams library, which was later incorporated into Java 9. Reactive programming is not about multithreading at all; it is about reducing a workload to a stream of events that can be processed asynchronously. In reactive programming, everything can be represented as a stream. Streams contain events flowing in sequence. For example, the results from a database call can be viewed as a stream of events: the rows that match our query. The results of a web service call can be viewed as a stream, even if it is a stream of one thing.
The term "stream" here should not be confused with the java.util.stream concepts from Java 8; while there are syntactic similarities, reactive streams are inherently asynchronous, whereas Java 8 streams provide an alternative to complex loops with variables to hold state. Reactive is tough to learn, so take your time. I recommend starting with Dave Syer's excellent three-part exploration and Andre Staltz's fantastic introduction. Java 9's java.util.concurrent.Flow classes are low-level definitions of reactive constructs (Publisher and Subscriber) and require a lot of code to construct a simple stream. For our journey, let's consider an RxJava alternative to the CompletableFuture example shown earlier:

Java

package example.java9;

import java.util.concurrent.TimeUnit;

import io.reactivex.rxjava3.core.Single;

public class RxJavaDemo {

    // Utility method to simulate a delay:
    private static void delay(int i) {
        try {
            TimeUnit.SECONDS.sleep(i);
        } catch (InterruptedException e) {}
    }

    private static String doThing1() {
        delay(1);
        return "Hello";
    }

    private static String doThing2(String input) {
        delay(1);
        return input + " World";
    }

    private static String doThing3(String input) {
        delay(1);
        return input + "!!";
    }

    public static void main(String[] args) {
        Single.fromCallable(() -> doThing1())
            .map(RxJavaDemo::doThing2)
            .map(RxJavaDemo::doThing3)
            .doAfterSuccess(System.out::println)
            .subscribe(); // Subscribe to start the reactive pipeline
    }
}

RxJava has two main types of streams, Single and Observable. Single is designed for streams expecting a single event, so it is appropriate here. Our stream starts with one event containing the "Hello" String returned from the doThing1() method, but it is still a stream. The map() function is an operator: it takes the event from one stream, processes it, and sends it out in a new, separate stream. In this case, the doThing2() method merely concatenates " World" to the end of the incoming String.
The next map() operator takes in the event, calls doThing3(), which concatenates "!!" to the incoming String, and produces yet another output stream. The doAfterSuccess() function operates on the only event it encounters in the Single stream by printing to the console. The subscribe() call is critical: the earlier code merely defines the algorithm; the subscribe() call effectively commands it to begin running. Here is another implementation, based on Spring's Reactor. Note the similarities:

Java

package example.java9;

import java.util.concurrent.TimeUnit;

import reactor.core.publisher.Mono;

public class ReactorDemo {

    // Utility method to simulate a delay:
    private static void delay(int i) {
        try {
            TimeUnit.SECONDS.sleep(i);
        } catch (InterruptedException e) {}
    }

    private static String doThing1() {
        delay(1);
        return "Hello";
    }

    private static String doThing2(String input) {
        delay(1);
        return input + " World";
    }

    private static String doThing3(String input) {
        delay(1);
        return input + "!!";
    }

    public static void main(String[] args) {
        Mono.fromCallable(() -> doThing1())
            .map(ReactorDemo::doThing2)
            .map(ReactorDemo::doThing3)
            .doOnNext(System.out::println)
            .subscribe();
    }
}

Reactor uses Mono and Flux instead of RxJava's Single and Observable. Our Mono begins with one event containing the "Hello" String, just as in the RxJava version. The map() and subscribe() methods are equivalent to their RxJava counterparts; doOnNext() is roughly equivalent to doAfterSuccess(). In both examples, the reactive streams are executed directly in the main thread when subscribe() is called. Once the streams are exhausted, program execution continues. There is no need for the delay() call seen in earlier examples because all work is done in the main thread. Let me repeat that: in the examples shown here, all work is done in the main thread.
The essential difference from the CompletableFuture chain constructed earlier, other than syntax, is that the reactive approach operates by breaking up the work into chunks (events) that will be processed asynchronously. Asynchronous doesn't mean "at the same time" or "in the background"; it means "NOT simultaneous or concurrent in time" (Merriam-Webster, emphasis mine). The model of execution is much closer to JavaScript's event loop than to the multithreading approach seen in earlier Java versions. While we could ask either implementation to pool multiple threads to break up the event processing, it's not necessary in these simple examples. Remember: the enemy in these examples is blocking, represented by the delay() / sleep() calls. Earlier examples tried to address blocking with multithreading; reactive tries to address it with asynchronous event handling. But do these reactive examples solve the blocking problem? If the calls to doThing1(), 2, and 3 make blocking calls, they will still block the entire thread, the main thread in these examples. The reactive approaches shown here each consume one thread instead of two, so they are more frugal with machine resources. We could use Schedulers to pool up separate threads to process the asynchronous events as they come in, but ultimately, in the case shown here, those separate threads would be blocked most of the time. Of course, this is a contrived example using sleep() to simulate an actual blocking call to a database or web service, all called from a public static void main() method. An optimized example would utilize reactive HTTP clients (like WebClient) or database libraries (like R2DBC). These return reactive types (Flux/Mono) which can flow into upstream objects that return reactive types themselves. If we can construct a perfect reactive call stack to a modern HTTP server with non-blocking IO handlers, we can eliminate all blocking calls.
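For a taste of what waiting without occupying a thread looks like in the standard library itself, CompletableFuture.delayedExecutor (added in Java 9) schedules a continuation on a shared timer rather than putting a worker thread to sleep. The sketch below is my own illustration, not one of the article's examples; it rebuilds the three-step chain so that no thread sleeps through the waits:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;

public class NonBlockingDelay {

    // A delay that occupies no thread: the continuation is scheduled
    // on a timer, then runs on the common pool when the time elapses.
    static CompletableFuture<String> after(long seconds, String value) {
        Executor timer = CompletableFuture.delayedExecutor(seconds, TimeUnit.SECONDS);
        return CompletableFuture.supplyAsync(() -> value, timer);
    }

    public static void main(String[] args) {
        String result = after(1, "Hello")
            .thenCompose(s -> after(1, s + " World"))
            .thenCompose(s -> after(1, s + "!!"))
            .join(); // the main thread does block here, but no worker thread slept
        System.out.println(result); // Hello World!!
    }
}
```

During the three simulated waits, no pool thread is parked; this is the same "schedule a continuation instead of holding a thread" idea the reactive libraries generalize.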
If this article were primarily about reactive programming, I'd provide such an example; suffice it to say it would eliminate all blocking with a bare minimum of threads. The essential difficulty for most developers is thinking in the reactive programming style. It requires a departure from easy-to-understand, step-by-step imperative programming. As with chains of CompletableFutures, reactive programs make our debugging, testing, and exception-handling work more difficult. To get the benefit of reactive programming, an application's entire call stack must deal in reactive types; if we introduce a single blocking call, we've eliminated the essential benefit of the model. In such a case, we may achieve a result no better than CompletableFuture, and give ourselves a migraine in the process. But we've solved the blocking problem! ...At a cost measured in aspirin. To summarize where we are with reactive programming:

- We must learn to think in the reactive programming style rather than the imperative one.
- An application's entire call stack must deal in reactive types.
- A single blocking call threatens the entire model.
- Debugging is tough; exception handling is perplexing.

Let's return to what we really want: the ability to make long-running, possibly blocking calls from our code without incurring the penalty of holding up a thread. Ideally, we'd like to tell the JVM it is OK for a thread to work on something else while we wait for a response. We would like to do this with imperative, step-by-step instructions, without struggling with an asynchronous event model, and definitely without the callback syntax we saw in JavaScript. The stage is now set for Java 21 Virtual Threads...

Java 21 Virtual Threads

After version 9, Java switched to a six-month release cadence, so the next few years went by fast.
In terms of multithreading or reactive programming, the next major advance would occur with Java 21’s (2023) introduction of Virtual Threads: Virtual Threads are a lightweight alternative to traditional Java platform threads backed by OS-level threads. Compared to platform threads they are wonderfully lightweight, it is trivial to launch thousands of them while your application is running. Pooling is completely unnecessary. Large numbers of virtual threads can run within each platform thread. All work is still ultimately done on platform threads but a special Continuation object is used to exchange which Virtual Thread is running based on when a blocking call is encountered. Compared with earlier models based on CompleteableFuture or Reactive Streams, coding with Virtual Threads is amazingly simple. Let’s go back to some code we haven’t seen since the Java 1 and 5 days when we would define all our work within simple Runnables or Callables: Java package example.java21; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.TimeUnit; public class Demo { // Utility method to simulate a delay private static void delay(int i) { try { TimeUnit.SECONDS.sleep(i); } catch (InterruptedException e) {} } private static String doThing1() { delay(1); return “Hello”; } private static String doThing2(String input) { delay(1); return input + ” World”; } private static String doThing3(String input) { delay(1); return input + “!!”; } // Create an instance of a Runnable that implements our logic private static Runnable myRunnable = () -> { String result1 = doThing1(); // Blocking, or so it seems… String result2 = doThing2(result1); // Blocking, or so it seems… String result3 = doThing3(result2); // Blocking, or so it seems… System.out.println(“Result from the thread: ” + result3); }; public static void main(String[] args) { // Create a Virtual Thread, pass the Runnable, and start it: 
        Executors.newVirtualThreadPerTaskExecutor().submit(myRunnable);

        // Do other work…

        // Introduce a delay to prevent the JVM
        // from exiting before work is complete.
        delay(4);
    }
}

The Executors factory class from Java 5 is back with a new factory method, newVirtualThreadPerTaskExecutor(). This returns an ExecutorService which supplies a Virtual Thread rather than a standard platform thread. The Runnable will execute within a Virtual Thread, which itself runs within a platform thread. The sleep delay has returned only because all work is being done in a daemon thread, which would be interrupted in our demo if the main thread exited. So what about the blocking? Virtual Threads run within platform threads served from a modified ForkJoinPool. A special internal object called the Continuation manages multiple Virtual Threads within a platform thread. When a blocking call is detected, code within the Virtual Thread calls Continuation.yield() to allow another Virtual Thread to have a turn running. The yield() process unmounts the Virtual Thread from the platform thread, copies its stack memory into heap memory, then begins running the next Virtual Thread on its wait list. When an OS handler signals that the result of the blocking call is ready, it calls Continuation.run() to place the Virtual Thread back on the wait list for the platform thread. If this platform thread is busy, the work-stealing capability of the ForkJoinPool comes into play, and any other available platform thread can resume the work. The blocking penalty has been eliminated with a bit of memory management, which is much more efficient. But how does the Virtual Thread detect blocking? Java 21 modified nearly every line of code in the JDK that could result in blocking. These now invoke the yield() process when blocking begins. This was a massive change to the JDK; over one thousand files were part of the pull request.
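To see how lightweight virtual threads are in practice, here is a minimal sketch (the class and method names are illustrative, not from the article) that launches 10,000 virtual threads, each performing a blocking sleep. Because every sleep() yields its carrier thread, the whole batch finishes in roughly the time of a single sleep:

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ManyVirtualThreads {

    // Submit n blocking tasks to a virtual-thread-per-task executor
    // and return how many completed.
    static int runTasks(int n) throws Exception {
        List<Future<Integer>> futures = new ArrayList<>();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                futures.add(executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(100)); // blocking: the carrier thread is released
                    return 1;
                }));
            }
        } // close() waits for all submitted tasks to finish
        int completed = 0;
        for (Future<Integer> f : futures) completed += f.get();
        return completed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTasks(10_000) + " tasks completed");
    }
}
```

Running the same loop against a fixed pool of platform threads of the same size would exhaust memory long before 10,000 threads; with virtual threads, pooling is unnecessary and a thread per task is the intended model.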
The result is that we can run normal imperative-style code in a Virtual Thread and completely ignore blocking penalties, because their ill effects have been eliminated. Virtual Threads allow the underlying platform thread to stay busy with other Virtual Threads while our code waits for results. Fewer threads do more work, just as we saw in Reactive Streams or JavaScript, but we don't need to embrace advanced asynchronous techniques or callbacks to enjoy these benefits. It is hard to overstate the revolutionary impact this has on how we can code in Java from this point forward, replacing the need for elegant but hard-to-grasp techniques like CompletableFuture or Reactive Streams with simple-as-Sunday imperative programming. Virtual Threads are not a silver bullet, however, and are not appropriate for every situation. They impose a slight operational burden compared with running directly in a platform thread, mainly due to the unmounting and remounting of memory when yield()/run() calls occur. Workloads that would not incur blocking, such as long-running calculations, are more appropriately implemented using the multithreading or reactive techniques described earlier in this article. Native code and code with synchronization blocks can run in Virtual Threads, but these tasks are pinned to a single platform thread, eliminating the work-stealing benefit. One possible danger to consider: the code in the example above runs efficiently only in a Virtual Thread, not in a platform thread. The modifications added throughout the JDK codebase only take effect when code is running within a Virtual Thread. If this code were modified slightly, such as removing the Runnable or calling a different Executors factory method, the result would be the same inefficient blocking we've spent 25 years trying to avoid. It can sometimes be difficult for a busy developer to notice such a subtle change.
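The danger of such a subtle change can be made concrete. In this sketch (the class name is my own), the same batch of blocking tasks is timed first on a virtual-thread executor and then on a small fixed platform-thread pool; one factory-method change turns a roughly 100 ms run into several seconds, because each sleeping platform thread is held for the full duration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SubtleChange {

    // Run `tasks` blocking jobs on the given executor and return elapsed millis.
    static long timed(ExecutorService executor, int tasks) {
        long start = System.nanoTime();
        try (executor) {
            for (int i = 0; i < tasks; i++) {
                executor.submit(() -> {
                    try {
                        TimeUnit.MILLISECONDS.sleep(100);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        } // close() waits for all submitted tasks to complete
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Virtual threads: each sleep releases its carrier thread
        long virtualMs = timed(Executors.newVirtualThreadPerTaskExecutor(), 100);
        // One-line change to a platform pool of 4: the same work now queues behind 4 OS threads
        long platformMs = timed(Executors.newFixedThreadPool(4), 100);
        System.out.println("virtual: " + virtualMs + " ms, fixed pool: " + platformMs + " ms");
    }
}
```

The business logic is byte-for-byte identical in both runs; only the executor choice differs, which is exactly why the regression is easy to miss in review.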
To summarize Java Virtual Threads: The blocking penalty experienced since the early days of Java can be eliminated in most practical cases. Developers can read, write, test, debug, and handle exceptions using easy-to-understand, imperative-style programming. Multithreading is still preferred in non-blocking situations, such as long-running calculations. I hope you've enjoyed this walk down memory lane culminating in the current state of the art. Be sure to explore the coding examples further if you like.
In the previous articles, you learned about virtual threads in Java 21 in terms of their history, benefits, and pitfalls. You also saw how Quarkus can help you avoid those pitfalls, and how Quarkus has been steadily integrating virtual threads into as many Java libraries as possible. In this article, you will learn how virtual threads perform when handling concurrent applications in terms of response time, throughput, and resident set size (RSS), compared against traditional blocking services and reactive programming. Many developers, along with IT Ops teams, wonder whether virtual threads are worth adopting in existing business applications in production for high-concurrency workloads. Performance Applications I've conducted benchmark testing with the Todo application, using Quarkus to implement three types of services: imperative (blocking), reactive (non-blocking), and virtual thread. The Todo application implements CRUD functionality against a relational database (e.g., PostgreSQL) by exposing REST APIs. Take a look at the following code snippets for each service to see how Quarkus enables developers to implement the getAll() method, which retrieves all data from the Todo entity (table) in the database. Find the solution code in this repository. Imperative (Blocking) Application In Quarkus applications, a method or class is treated as blocking when it is annotated with @Blocking or has a non-stream return type (e.g., String, List).

Java

@GET
public List<Todo> getAll() {
    return Todo.listAll(Sort.by("order"));
}

Virtual Threads Application It's quite simple to turn a blocking application into a virtual thread application. As you see in the following code snippet, you just need to add a @RunOnVirtualThread annotation to the blocking service's getAll() method.
Java

@GET
@RunOnVirtualThread
public List<Todo> getAll() {
    return Todo.listAll(Sort.by("order"));
}

Reactive (Non-Blocking) Application Writing a reactive application can be a big challenge for Java developers, as it requires understanding the reactive programming model along with continuation and event stream handler implementations. Quarkus allows developers to implement both non-reactive and reactive applications in the same class because Quarkus is built on reactive engines such as Netty and Vert.x. To make an asynchronous reactive application in Quarkus, you can add a @NonBlocking annotation or set the return type to Uni or Multi from the SmallRye Mutiny project, as in the getAll() method below.

Java

@GET
public Uni<List<Todo>> getAll() {
    return Panache.withTransaction(() -> Todo.findAll(Sort.by("order")).list());
}

Benchmark Scenario To make the test results more reliable and fair, we've followed the TechEmpower guidelines, such as conducting multiple scenarios and running on both bare metal and containers on Kubernetes. Here is the common test scenario for the three applications (blocking, reactive, and virtual threads), as shown in Figure 1: Fetch all rows from a DB (quotes) Add one quote to the returned list Sort the list Return the list as JSON Figure 1: Performance test architecture Response Time and Throughput During the performance test, we increased the concurrency level from 1200 to 4400 requests per second. As expected, the virtual thread service scaled better than worker threads (traditional blocking services) in terms of response time and throughput. Importantly, though, it did not consistently outperform the reactive service. Once the concurrency level reached 3500 requests per second, the virtual threads' response time and throughput fell below even the worker threads.
Figure 2: Response time and throughput Resource Usage (CPU and RSS) When you design a concurrent application, regardless of cloud deployment, you or your IT Ops team need to estimate resource utilization and capacity along with scalability. CPU and RSS (resident set size) usage are key metrics for measuring resource utilization. Here, once the concurrency level reached 2000 requests per second, the CPU and memory usage of the virtual threads climbed rapidly above that of the worker threads. Figure 3: Resource usage (CPU and RSS) Memory Usage: Container Container platforms (e.g., Kubernetes) are unavoidable when running concurrent applications with high scalability, resiliency, and elasticity in the cloud. The virtual threads had lower memory usage inside the limited container environment than the worker threads. Figure 4: Memory usage – Container Conclusion You learned how virtual threads performed in multiple environments in terms of response time, throughput, resource usage, and container runtimes. Virtual threads appear to beat the blocking services on worker threads most of the time, but when you look at the performance metrics carefully, their measured performance dropped below the blocking services at some concurrency levels. The reactive services on the event loops, on the other hand, consistently outperformed both the virtual and worker threads. Thus, virtual threads can provide sufficient performance and resource efficiency depending on your concurrency goal. Better yet, virtual threads still make it quite simple to develop concurrent applications, without the steep learning curve that reactive programming demands.
Our Excel spreadsheets hold a lot of valuable data in their dozens, hundreds, or even thousands of cells and rows. With that much clean, formatted digital data at our disposal, it's up to us to find programmatic methods for extracting and sharing that data among other important documents in our file ecosystem. Thankfully, Microsoft made that extremely easy to do when they switched their file representation standard over to OpenXML more than 15 years ago. This open-source XML-based approach drastically improved the accessibility of all Office document contents by basing their structure on well-known technologies (namely Zip and XML) which most software developers intimately understand. Before that, Excel (XLS) files were stored in a binary file format known as BIFF (Binary Interchange File Format), and other proprietary binary formats were used to represent additional Office files like Word (DOC). This change to an open document standard made it possible for developers to build applications that could interact directly with Office documents in meaningful ways. To get information about the structure of a particular Excel workbook, for example, a developer could write code to access xl/workbook.xml in the XLSX file structure and get all the workbook metadata they need. Similarly, to get specific sheet data, they could access xl/worksheets/(sheetname).xml, knowing that each cell and value within that sheet will be represented by simple <c> and <v> elements with all their relevant data nested within. This is a bit of an oversimplification, but it serves to point out the ease of navigating a series of zipped XML file paths. Given the global popularity of Excel files, building (or simply expanding) applications to load, manipulate, and extract content from XLSX was a no-brainer. There are dozens of examples of modern applications that can seamlessly load and manipulate XLSX files, and many even provide the option to export files in XLSX format.
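To make the zip-of-XML structure concrete, here is a small hypothetical sketch (the class name and the sample XML content are mine, not taken from a real workbook) that treats an XLSX file as what it is, a zip archive, and pulls out the xl/workbook.xml entry using only the JDK:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class XlsxPeek {

    // Build a minimal zip containing a stand-in xl/workbook.xml entry,
    // simulating a real .xlsx file (which is itself just a zip archive).
    static byte[] fakeXlsx() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.putNextEntry(new ZipEntry("xl/workbook.xml"));
            zos.write("<workbook><sheets><sheet name=\"Sheet1\"/></sheets></workbook>"
                    .getBytes(StandardCharsets.UTF_8));
            zos.closeEntry();
        }
        return bos.toByteArray();
    }

    // Scan the zip entries for xl/workbook.xml and return its XML text.
    public static String readWorkbookXml(byte[] xlsxBytes) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(xlsxBytes))) {
            for (ZipEntry e; (e = zis.getNextEntry()) != null; ) {
                if (e.getName().equals("xl/workbook.xml")) {
                    return new String(zis.readAllBytes(), StandardCharsets.UTF_8);
                }
            }
        }
        throw new FileNotFoundException("xl/workbook.xml not present");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readWorkbookXml(fakeXlsx()));
    }
}
```

In a real application, the same readWorkbookXml logic would run against the bytes of an actual .xlsx file, and an XML parser would then extract the sheet names and metadata from the returned document.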
When we set out to build our applications to interact with Excel documents, we have several options at our disposal. We can write our code to sift through the OpenXML document formatting ourselves, we can download a specialized programming library, or we can call a specially designed web API to take care of a specific document interaction on our behalf. The first two options can help us keep our code localized, but they'll chew up a good amount of keyboard time and prove a little more costly to run. With the latter option, we can offload our coding and processing overhead to an external service, reaping all the benefits with a fraction of the hassle. Perhaps most beneficially, we can use APIs to save time and rapidly get our application prototypes off the ground. Demonstration In the remainder of this article, I'll quickly demonstrate two free-to-use web APIs that allow us to retrieve content from specific cells in our XLSX spreadsheets in slightly different ways. Ready-to-run Java code is available below to make structuring our calls straightforward. Both APIs will return information about our target cell, including the cell path, cell text value, cell identifier, cell style index, and the formula (if any) used within that cell. With the information in our response object, we can subsequently ask our applications to share data between spreadsheets and other open standard files for myriad purposes. Conveniently, both API requests can be authorized with the same free API key. It's also important to note that both APIs process file data in memory and release that data upon completion of the request. This makes both requests fast and extremely secure. The first of these two API solutions will locate the data we want using the row index and cell index in our request. The second solution will instead use the cell identifier (i.e., A1, B1, C1, etc.) for the same purpose.
While cell index and cell identifier are often regarded as interchangeable (both locate a specific cell in a specific location within an Excel worksheet), using the cell index can make it easier for our application to adapt dynamically to any changes within our document, while the cell identifier will always remain static. To use these APIs, we'll start by installing the SDK with Maven. We can first add a reference to the repository in pom.xml:

XML

<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>

We can then add a reference to the dependency in pom.xml:

XML

<dependencies>
    <dependency>
        <groupId>com.github.Cloudmersive</groupId>
        <artifactId>Cloudmersive.APIClient.Java</artifactId>
        <version>v4.25</version>
    </dependency>
</dependencies>

With installation out of the way, we can structure our request parameters and use ready-to-run Java code examples to make our API calls. To retrieve cell data using the row index and cell index, we can format our request parameters like the application/JSON example below:

JSON

{
  "InputFileBytes": "string",
  "InputFileUrl": "string",
  "WorksheetToQuery": {
    "Path": "string",
    "WorksheetName": "string"
  },
  "RowIndex": 0,
  "CellIndex": 0
}

And we can use the below code to call the API once our parameters are set:

Java

// Import classes:
//import com.cloudmersive.client.invoker.ApiClient;
//import com.cloudmersive.client.invoker.ApiException;
//import com.cloudmersive.client.invoker.Configuration;
//import com.cloudmersive.client.invoker.auth.*;
//import com.cloudmersive.client.EditDocumentApi;

ApiClient defaultClient = Configuration.getDefaultApiClient();

// Configure API key authorization: Apikey
ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey");
Apikey.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.setApiKeyPrefix("Token");

EditDocumentApi apiInstance = new EditDocumentApi();
GetXlsxCellRequest input = new GetXlsxCellRequest(); // GetXlsxCellRequest | Document input request
try {
    GetXlsxCellResponse result = apiInstance.editDocumentXlsxGetCellByIndex(input);
    System.out.println(result);
} catch (ApiException e) {
    System.err.println("Exception when calling EditDocumentApi#editDocumentXlsxGetCellByIndex");
    e.printStackTrace();
}

To retrieve cell data using the cell identifier, we can format our request parameters like the application/JSON example below:

JSON

{
  "InputFileBytes": "string",
  "InputFileUrl": "string",
  "WorksheetToQuery": {
    "Path": "string",
    "WorksheetName": "string"
  },
  "CellIdentifier": "string"
}

We can use the final code examples below to structure our API call once our parameters are set:

Java

// Import classes:
//import com.cloudmersive.client.invoker.ApiClient;
//import com.cloudmersive.client.invoker.ApiException;
//import com.cloudmersive.client.invoker.Configuration;
//import com.cloudmersive.client.invoker.auth.*;
//import com.cloudmersive.client.EditDocumentApi;

ApiClient defaultClient = Configuration.getDefaultApiClient();

// Configure API key authorization: Apikey
ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey");
Apikey.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.setApiKeyPrefix("Token");

EditDocumentApi apiInstance = new EditDocumentApi();
GetXlsxCellByIdentifierRequest input = new GetXlsxCellByIdentifierRequest(); // GetXlsxCellByIdentifierRequest | Document input request
try {
    GetXlsxCellByIdentifierResponse result = apiInstance.editDocumentXlsxGetCellByIdentifier(input);
    System.out.println(result);
} catch (ApiException e) {
    System.err.println("Exception when calling EditDocumentApi#editDocumentXlsxGetCellByIdentifier");
    e.printStackTrace();
}

That's all the code we'll need.
With utility APIs at our disposal, we’ll have our projects up and running in no time.
In previous posts, we have seen how to implement distributed configuration and service discovery in Spring Cloud. In this article, we will discuss how to implement REST calls between microservices, implementing REST clients with OpenFeign. Services Communication There are two main styles of service communication: synchronous and asynchronous. In the asynchronous style, the thread making the call does not wait for a response from the server and is not blocked. A typical protocol used for such scenarios is AMQP. This protocol involves messaging architectures, with message brokers like RabbitMQ and Apache Kafka. Asynchronous calls are also possible with the REST protocol: the Spring Boot WebClient technology, available in the Spring Web Reactive module, provides that. Nevertheless, in this article, we are not discussing that communication style; we will talk about synchronous REST calls instead. The technology we are going to describe for making synchronous REST calls is the OpenFeign package. OpenFeign Feign was originally a Netflix component, later moved to the open-source community as OpenFeign. With OpenFeign, we can implement REST clients in a declarative way, which has some analogy with Spring Data repositories: we define interfaces with REST method definitions, and the framework generates the client part under the hood. The stack we are using in this article is: Spring Boot: 3.2.1 Spring Cloud: 2023.0.0 release train.
Java 17 REST Clients With OpenFeign: Basic Configuration To enable OpenFeign for a Spring Boot project, we have to add the following dependency in the pom.xml file:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

The next step would be to annotate the Spring configuration class with the @EnableFeignClients annotation:

Java

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class AppMain {
    public static void main(String[] args) {
        SpringApplication.run(AppMain.class, args);
    }
}

We can add several parameters to this annotation. For instance, we can specify base packages and explicitly list the client implementations:

Java

@EnableFeignClients(basePackages = {"com.codingstrain.springcloud.sample.libraryapp.books.client"},
    clients = { AuthorClient.class, ReviewClient.class })

This allows us to avoid unnecessary scanning of packages and classes. As seen in the next section, we can also define a configuration class by the defaultConfiguration parameter. REST Clients With OpenFeign: Customization OpenFeign is made of a set of sub-components defined by specific interfaces. It is distributed with a default set of implementations of those interfaces. We can customize this set and override the implementations by specifying a configuration class in the @EnableFeignClients annotation:

Java

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients(defaultConfiguration = BookClientConfiguration.class)
public class AppMain {
    public static void main(String[] args) {
        SpringApplication.run(AppMain.class, args);
    }
}

Some of the default implementations of the OpenFeign components are: Decoder: The default implementation is ResponseEntityDecoder. Encoder: The default implementation is SpringEncoder. Logger: The default implementation is Slf4jLogger. Contract: The default implementation is SpringMvcContract. It serves the purpose of providing annotation processing.
Client: According to the documentation, if the Spring Cloud Load Balancer library is on the classpath, the FeignBlockingLoadBalancerClient is used. Spring Cloud Load Balancer is included in the spring-cloud-dependencies release train 2023.0.0 described in this article. We can override one or more components by redefining the related beans in the configuration class:

Java

@Configuration
public class BookClientConfiguration {

    @Bean
    public Contract feignContract() {
        return new SpringMvcContract();
    }
    …
}

We can also configure OpenFeign programmatically by using a Feign builder:

Java

Feign.builder()
    .client(new ApacheHttpClient())
    .build();

A further possibility is to customize the configuration by properties files. For instance, we can apply settings to a specific service name, like in the following example:

YAML

feign:
  client:
    config:
      author-service:
        connectTimeout: 5000

If we set "default" instead of "author-service", the above setting will be valid for all services. REST Clients With OpenFeign: Client Interfaces Client Definitions To define the REST clients for the application, we have to write their interfaces with the appropriate mappings. It is up to the framework to provide their implementations at runtime. To inform the framework that those interfaces are meant to be OpenFeign clients, we must use the @FeignClient annotation:

Java

@FeignClient(name = "author-service")
public interface AuthorClient {

    @GetMapping("/author/{name}")
    public Optional<Author> findByName(@PathVariable("name") String name);
}

@FeignClient(name = "review-service")
public interface ReviewClient {

    @GetMapping("/review/{bookTitle}")
    public List<Review> findByBookTitle(@PathVariable("bookTitle") String bookTitle);
}

OpenFeign can automatically interact with a discovery server. If we configure our application to use Eureka, for example, the name parameter above would represent the actual name of a service in the discovery registry. An alternative would be to specify an explicit URL by the url parameter.
Making Use of Inheritance There is a way to write cleaner code for the OpenFeign clients by using inheritance. We can define all the methods and mappings in an interface:

Java

public interface AuthorRESTService {

    @GetMapping("/author/{name}")
    public Optional<Author> findByName(@PathVariable("name") String name);
}

Then we can implement that interface in a controller. This way, we inherit the mapping annotations from the above interface:

Java

@RestController
@RequestMapping("/library")
public class AuthorController implements AuthorRESTService {

    @Autowired
    private AuthorService authorService;

    @Override
    public Optional<Author> findByName(@PathVariable("name") String name) {
        return authorService.findByName(name);
    }
}

As for the client, we can extend the interface, and there's no need to define any method at all. We just need to annotate the interface with @FeignClient and provide the service name:

Java

@FeignClient(name = "author-service")
public interface AuthorClient extends AuthorRESTService {
}

REST Clients With OpenFeign: Using the Clients We can use the REST client implementations by simply injecting the above-defined interfaces.
For instance, we can inject them into a Spring service component:

Java

@Service("bookService")
public class BookService {

    Logger logger = LoggerFactory.getLogger(BookService.class);

    @Autowired
    private AuthorClient authorClient;

    @Autowired
    private ReviewClient reviewClient;

    @Autowired
    private BookRepository bookRepository;

    public BookInfo findBookInfoByTitle(@RequestParam("authorName") String authorName,
            @RequestParam("bookTitle") String bookTitle) {
        Optional<Author> author = authorClient.findByName(authorName);
        List<Review> reviews = reviewClient.findByBookTitle(bookTitle);
        BookInfo bookInfo = new BookInfo();
        bookInfo.setAuthorBiography(author.get().getBiography());
        bookInfo.setAuthorName(authorName);
        List<String> reviewContents = reviews.stream()
            .map(item -> item.getContent())
            .collect(Collectors.toList());
        bookInfo.setTitle(bookTitle);
        bookInfo.setBookReviews(reviewContents);
        return bookInfo;
    }
    …
}

In the above example, the authorClient and reviewClient OpenFeign clients are used to extract some information related to a book item. Then, the information is returned as a BookInfo object. Having defined the Spring service this way, we can then use it inside a controller:

Java

@RestController
@RequestMapping("/library")
public class BookController {

    Logger logger = LoggerFactory.getLogger(BookService.class);

    @Autowired
    private BookService bookService;

    @GetMapping(value = "/bookInfo", params = { "authorName", "bookTitle" })
    public BookInfo findBookInfoByTitle(@RequestParam("authorName") String authorName,
            @RequestParam("bookTitle") String bookTitle) {
        return bookService.findBookInfoByTitle(authorName, bookTitle);
    }
    …
}

Running an Example The code from this article is available on GitHub. The logic behind the example is very simple. We have a microservice named book-service representing the backend of an application that allows browsing books.
From a specific book, the book-service microservice can get the author's biography and book reviews by the author's name and book title. This information is fetched by calling two other services, author-service and review-service. The set of modules includes a Eureka discovery service: libraryapp-discovery-server: A Eureka discovery server libraryapp-discovery-authors: Allows us to get a book author's biography libraryapp-discovery-reviews: Allows us to get book reviews libraryapp-discovery-books: Uses the author and review services to get the author's biography and reviews of a specific book The single modules register themselves on the discovery registry by the following piece of configuration in the YAML configuration file:

YAML

eureka:
  client:
    serviceUrl:
      defaultZone: http://myusername:mypassword@localhost:8760/eureka/

We can compile the modules and launch all the instances by the following commands:

discovery-service: java -jar libraryapp-discovery-server-1.0-SNAPSHOT.jar
author-service: java -jar libraryapp-discovery-authors-1.0-SNAPSHOT.jar
review-service: java -jar libraryapp-discovery-reviews-1.0-SNAPSHOT.jar
book-service: java -jar libraryapp-discovery-books-1.0-SNAPSHOT.jar

Just to see how traffic is distributed, we can launch two instances of author-service, overriding the port setting:

java -jar libraryapp-discovery-authors-1.0-SNAPSHOT.jar --PORT=8084
java -jar libraryapp-discovery-authors-1.0-SNAPSHOT.jar --PORT=8085

We expect the traffic toward author-service to be equally balanced between the two instances, in a round-robin fashion, which is the default. Conclusion A core feature of microservice architectures is the remote communication between services. In this article, we have discussed the synchronous REST scenario. Spring Cloud is gradually shifting its whole stack away from the Netflix OSS components. Feign is still there, as its open-community OpenFeign counterpart, and still offers a valid and highly configurable solution.
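Spring Cloud LoadBalancer handles the instance selection internally; conceptually, the default round-robin policy behaves like the following standalone sketch (class name and instance addresses are illustrative only, not part of the Spring Cloud API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobin {

    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobin(List<String> instances) {
        this.instances = instances;
    }

    // Cycle through the registered instances in order, wrapping around.
    String choose() {
        int idx = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(idx);
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(List.of("localhost:8084", "localhost:8085"));
        for (int i = 0; i < 4; i++) {
            System.out.println(rr.choose()); // alternates between the two instances
        }
    }
}
```

The AtomicInteger makes the counter safe under concurrent requests, which is the same reason a shared load-balancer bean must track its position atomically.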
Java, as a programming language, has evolved over the years, introducing new features and improvements to enhance developer productivity and code readability. One notable feature, previewed in Java 14 and finalized in Java 16, is records: a concise way to define immutable data-carrying classes. If you have been using Lombok to reduce boilerplate code in your Java classes, it's worth considering migrating to records for a more native and standardized approach. In this article, we will explore the process of migrating Java code with Lombok to Java records, using practical examples. Why Migrate From Lombok to Records? Lombok has been widely adopted in the Java community for its ability to reduce verbosity by automatically generating getters, setters, constructors, and other repetitive code. While Lombok is effective, the introduction of records provides a standardized and built-in solution for defining immutable data classes. Records offer better integration with the language and are supported natively by various tools and frameworks. Migrating Getters and Setters Lombok Example

Java

import lombok.Data;

@Data
public class Movie {
    private String title;
    private int releaseYear;
}

Record Example

Java

public record Movie(String title, int releaseYear) {
}

In the record example, we define a Movie class with two fields (title and releaseYear) in the constructor parameter list. The compiler automatically generates the constructor, equals(), hashCode(), and toString() methods, which are similar to what Lombok would generate. Migrating Constructors Lombok Example

Java

import lombok.AllArgsConstructor;
import lombok.NoArgsConstructor;

@AllArgsConstructor
@NoArgsConstructor
public class Series {
    private String title;
    private int startYear;
    private int endYear;
}

Record Example

Java

public record Series(String title, int startYear, int endYear) {
}

Records automatically generate a compact and expressive constructor that initializes all fields.
In the Series record example, the constructor takes three parameters corresponding to the fields title, startYear, and endYear. Immutable Records Lombok Example

Java

import lombok.Value;

@Value
public class Actor {
    private String name;
    private int birthYear;
}

Record Example

Java

public record Actor(String name, int birthYear) {
}

Records inherently provide immutability, as all fields are marked as final by default. In the record example for the Actor class, the name and birthYear fields are immutable, and no setters are generated. Handling Default Values Sometimes, it's necessary to handle default values for certain fields. In Lombok, this can be achieved using @Builder or custom methods. With records, default values can be set directly in the compact constructor. Lombok Example

Java

import lombok.Builder;

@Builder
public class Film {
    private String title;
    private String director;
    private int releaseYear;
}

Record Example

Java

import java.util.Objects;

public record Film(String title, String director, int releaseYear) {
    public Film {
        // In a compact constructor we reassign the parameters (not this.title);
        // the implicit field assignments happen after this body runs.
        if (Objects.isNull(title)) {
            title = "Unknown Title";
        }
        if (Objects.isNull(director)) {
            director = "Unknown Director";
        }
    }
}

In the record example for the Film class, default values are set directly in the compact constructor body: if title or director is null, a default value is assigned to the corresponding parameter before the record's fields are initialized. Using Builder With Lombok

Java

import lombok.Builder;

@Builder
public class FilmWithLombok {
    private String title;
    private String director;
    private int releaseYear;
}

// Example of using the builder:
FilmWithLombok film = FilmWithLombok.builder()
    .title("Inception")
    .director("Christopher Nolan")
    .releaseYear(2010)
    .build();

In the Lombok example, the @Builder annotation generates a builder class for the FilmWithLombok class. The builder provides a fluent API for constructing instances with optional and chainable setter methods.
Using Builder With Java Record

Java

public record FilmWithRecord(String title, String director, int releaseYear) {

    public static class Builder {
        private String title;
        private String director;
        private int releaseYear;

        public Builder title(String title) {
            this.title = title;
            return this;
        }

        public Builder director(String director) {
            this.director = director;
            return this;
        }

        public Builder releaseYear(int releaseYear) {
            this.releaseYear = releaseYear;
            return this;
        }

        public FilmWithRecord build() {
            return new FilmWithRecord(title, director, releaseYear);
        }
    }
}

// Example of using the builder:
FilmWithRecord film = new FilmWithRecord.Builder()
    .title("The Dark Knight")
    .director("Christopher Nolan")
    .releaseYear(2008)
    .build();

For Java records, we create a static nested Builder class within the record. The builder class has methods for setting each field and a build method to create an instance of the record. This provides a fluent API similar to the Lombok example. Using builders with records or Lombok provides a convenient and readable way to construct instances, especially when dealing with classes with multiple fields. Choose the approach that aligns with your preferences and project requirements. Conclusion Migrating from Lombok to Java records is a step towards leveraging native language features for better code maintainability and readability. Records provide a standardized and concise way to define immutable data classes, eliminating the need for additional libraries like Lombok. By following the examples provided in this article, developers can seamlessly transition their code to benefit from the latest language features in Java 16+. Remember to update your build configuration and dependencies accordingly, and enjoy the enhanced expressiveness and simplicity offered by Java records.
In the most recent updates to Java, the String class has gained a series of significant methods. Some of them return Stream instances, while others are higher-order functions. The intention behind these methods is to offer a streamlined approach to handling strings in a stream-oriented manner, which simplifies code and enhances expressiveness, making it easier to apply operations like filtering, mapping, and reduction. Another advantage is that the Stream API enables parallel processing: the streams these methods return can be made parallel, leveraging multicore processors for the efficient handling of large strings. This article delves into a few methods within the String class that enable processing in a functional programming style.

# chars():

The chars() method facilitates effective character management in Java by returning an IntStream: a sequence of int values, each corresponding to one char (UTF-16 code unit) of the string. For characters outside the Basic Multilingual Plane, which are represented as surrogate pairs, use codePoints() instead; it streams full Unicode code points. A code point is the numeric identifier assigned to a character in the Unicode standard for character encoding. Let's understand chars() with an example: write a program that removes a given character from a string. First, let's tackle this challenge with an imperative, non-functional approach that avoids chars() and the Stream API.
Java private static String removeChar(String input, char c) { StringBuilder sb = new StringBuilder(); char[] charArray = input.toCharArray(); for (char ch : charArray) { if (ch != c) { sb.append(ch); } } return sb.toString(); }

Let's compare this with the functional approach:

Java private static String removeChar(String str, char c) { return str.chars() .filter(ch -> ch != c) .mapToObj(ch -> String.valueOf((char) ch)) .collect(Collectors.joining()); }

The imperative, non-functional approach involves traditional iteration over the characters, using a StringBuilder to build the modified string. The functional approach leverages the chars() method and the Stream API, providing a more concise and expressive solution.

# transform():

The transform method is a higher-order function that accepts a Function as an argument and offers a concise, functional way to apply transformations to a string. It is particularly useful for chaining transformations. For example, consider a scenario where you want to clean and format user input entered in a form: users might type their names with extra white space, mixed capitalization, and unnecessary characters. Chained transform calls can standardize and clean up this input:

Java String userInput = " JoHN-dOe "; String cleanedInput = userInput .transform(String::trim) .transform(String::toLowerCase) .transform(user -> user.replaceAll("-", "")); // cleanedInput is now "johndoe"

# lines():

The lines() method returns a stream of lines extracted from the given string, separated by line terminators such as \n, \r, and \r\n. It proves advantageous over split() due to its lazy element supply and faster detection of line terminators. If the string is empty, lines() returns zero lines.
Java String text = "The lines function returns a stream of lines extracted,\n" + "The Java String lines() method proves advantageous;\n" + "In cases where the string is empty,\n" + "the lines() function returns zero lines."; text.lines() .map(String::toUpperCase) .filter(line -> line.contains("I")) .forEach(System.out::println);

The text string contains multiple lines of text. We use the lines() method to obtain a stream of lines, then use the map operation to convert each line to uppercase. The filter operation keeps only the lines containing the letter 'I', and the forEach operation prints the modified lines. The functions explained above provide a powerful and concise way to work with strings. They offer a functional approach by leveraging streams for efficient manipulation and filtering, promoting immutability. Chaining these functions with other stream operations allows for complex yet concise transformations, promoting a cleaner and more functional style.
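As mentioned earlier, the streams returned by these methods can also be processed in parallel, which can pay off on large strings. A minimal sketch, with a hypothetical class name and sample data:

```java
public class ParallelCharsDemo {
    public static void main(String[] args) {
        // A large input string: "banana" repeated 100,000 times.
        String text = "banana".repeat(100_000);

        // chars() returns an IntStream; parallel() lets the terminal
        // operation (count) run across multiple cores.
        long vowels = text.chars()
                .parallel()
                .filter(ch -> "aeiou".indexOf(ch) >= 0)
                .count();

        // "banana" contains three vowels, so the total is 300,000.
        System.out.println(vowels);
    }
}
```

Counting and other associative reductions are safe to parallelize; order-sensitive operations like forEach should use forEachOrdered if ordering matters in a parallel stream.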
In the dynamic landscape of modern application development, efficient and seamless interaction with databases is paramount. HarperDB, with its NoSQL capabilities, provides a robust solution, and the HarperDB SDK for Java offers a convenient interface for integrating Java applications with it. This article is a comprehensive guide to getting started with the HarperDB SDK for Java. Whether you're a seasoned developer or just diving into the world of databases, the SDK aims to simplify the complexities of database management, allowing you to focus on HarperDB's NoSQL features.

Motivation for Using HarperDB SDK

Before delving into the intricacies of the SDK, let's explore the motivations behind its usage. The SDK provides a straightforward pathway for Java applications to communicate with HarperDB via HTTP requests. By abstracting away the complexities of raw HTTP interactions, developers can concentrate on leveraging HarperDB's NoSQL capabilities rather than hand-crafting requests. In the fast-paced realm of software development, time is a precious resource: instead of reinventing the wheel by manually constructing HTTP calls and managing the details of communication with HarperDB, developers work against a high-level interface that streamlines these operations. This expedites the development process and enhances code maintainability, allowing developers to allocate more time to core business logic and innovation.
The motivation for utilizing HTTP as the communication protocol between Java applications and HarperDB is rooted in efficiency, security, and performance. While SQL is a widely adopted language for querying and managing relational databases, the RESTful HTTP interface provided by HarperDB offers distinct advantages. It's essential to note that the SQL parser within HarperDB is an evolving feature, and not all SQL functionality is fully optimized or able to use indexes. As a result, the REST interface emerges as the more stable, secure, and performant option for interacting with data. The RESTful nature of HTTP communication also aligns with modern development practices, providing a scalable and straightforward approach to data interaction. SQL in HarperDB remains useful for administrative ad-hoc querying and for leveraging existing SQL statements, but for day-to-day data operations the REST interface is preferred; as HarperDB's features evolve, this guidance will be updated to reflect its latest capabilities. Understanding the motivation behind the SDK and the choice of HTTP as the communication protocol lays a solid foundation for an efficient and streamlined development process.
The SDK is a valuable tool to save time and simplify complex interactions with HarperDB, allowing developers to focus on innovation rather than the intricacies of low-level communication. As we embark on the hands-on session on the following topic, we will delve into practical examples and guide you through integrating the SDK into your Java project. Let’s dive into the hands-on session to bring theory into practice and unlock the full potential of HarperDB for your Java applications. Hands-On Session: Building a Simple Java SE Application with HarperDB In this hands-on session, we’ll guide you through creating a simple Java SE application that performs CRUD operations using the HarperDB SDK. Before we begin, ensure you have a running instance of HarperDB. For simplicity, we’ll use a Docker instance with the following command: Shell docker run -d -e HDB_ADMIN_USERNAME=root -e HDB_ADMIN_PASSWORD=password -e HTTP_THREADS=4 -p 9925:9925 -p 9926:9926 harperdb/harperdb This command sets up a HarperDB instance with a root username and password for administration. The instance will be accessible on ports 9925 and 9926. Now, let’s proceed with building our Java application. We’ll focus on CRUD operations for a straightforward entity—Beer. Throughout this session, we’ll demonstrate the seamless integration of the HarperDB SDK into a Java project. To kickstart our project, we’ll create a Maven project and include the necessary dependencies—HarperDB SDK for Java and DataFaker for generating beer data. Create a Maven Project Open your preferred IDE or use the command line to create a new Maven project. If you’re using an IDE, there is typically an option to create a new Maven project. 
If you’re using the command line, you can use the following command: Shell mvn archetype:generate -DgroupId=com.example -DartifactId=harperdb-demo -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false Replace com.example with your desired package name and harperdb-demo with the name of your project. Include dependencies in pom.xml: Open the pom.xml file in your project and include the following dependencies: XML <dependencies> <dependency> <groupId>expert.os.harpderdb</groupId> <artifactId>harpderdb-core</artifactId> <version>0.0.1</version> </dependency> <dependency> <groupId>net.datafaker</groupId> <artifactId>datafaker</artifactId> <version>2.0.2</version> </dependency> </dependencies> Create the Beer Entity In your src/main/java/com/example directory, create a new Java file named Beer.java. Define the Beer entity as a record, taking advantage of the immutability provided by records. Additionally, include a static factory method to create a Beer instance using DataFaker: Java package com.example; import net.datafaker.Faker; public record Beer(String id, String name, String style, String brand) { static Beer of(Faker faker) { String id = faker.idNumber().valid(); String name = faker.beer().name(); String style = faker.beer().style(); String brand = faker.beer().brand(); return new Beer(id, name, style, brand); } } With these initial steps, you’ve set up a Maven project, included the required dependencies, and defined a simple immutable Beer entity using a record. The next phase involves leveraging the HarperDB SDK to perform CRUD operations with this entity, showcasing the seamless integration between Java and HarperDB. Let’s proceed to implement the interaction with HarperDB in the subsequent steps of our hands-on session. The Server and Template classes are fundamental components of the HarperDB SDK for Java, providing a seamless interface for integrating Java applications with HarperDB’s NoSQL database capabilities. 
Let’s delve into the purpose and functionality of each class. Server Class The Server class is the entry point for connecting with a HarperDB instance. It encapsulates operations related to server configuration, database creation, schema definition, table creation, and more. Using the ServerBuilder, users can easily set up the connection details, including the host URL and authentication credentials. Key features of the Server class: Database management: Create, delete, and manage databases. Schema definition: Define schemas within databases. Table operations: Create tables with specified attributes. Credential configuration: Set up authentication credentials for secure access. Template Class The Template class is a high-level abstraction for performing CRUD (Create, Read, Update, Delete) operations on Java entities within HarperDB. It leverages Jackson’s JSON serialization to convert Java objects to JSON, facilitating seamless communication with HarperDB via HTTP requests. Key features of the Template class: Entity operations: Perform CRUD operations on Java entities. ID-based retrieval: Retrieve entities by their unique identifiers. Integration with Server: Utilize a configured Server instance for database interaction. Type-Safe operations: Benefit from type safety when working with Java entities. Together, the Server and Template classes provide a robust foundation for developers to integrate their Java applications with HarperDB effortlessly. In the subsequent sections, we’ll explore practical code examples to illustrate the usage of these classes in real-world scenarios, showcasing the simplicity and power of the HarperDB SDK for Java. Let’s delve into the code and discover the capabilities these classes bring to your Java projects. In this session, we’ll execute a comprehensive code example to demonstrate the functionality of the HarperDB SDK for Java. 
The code below showcases a practical scenario where we create a database, define a table, insert a beer entity, retrieve it by ID, delete it, and then confirm its absence.

Java public static void main(String[] args) { // Create a Faker instance for generating test data Faker faker = new Faker(); // Configure HarperDB server with credentials Server server = ServerBuilder.of("http://localhost:9925") .withCredentials("root", "password"); // Create a database and table server.createDatabase("beers"); server.createTable("beer").id("id").database("beers"); // Obtain a Template instance for the "beers" database Template template = server.template("beers"); // Generate a random beer entity Beer beer = Beer.of(faker); // Insert the beer entity into the "beer" table template.insert(beer); // Retrieve the beer by its ID and print it template.findById(Beer.class, beer.id()).ifPresent(System.out::println); // Delete the beer entity by its ID template.delete(Beer.class, beer.id()); // Attempt to retrieve the deleted beer and print a message template.findById(Beer.class, beer.id()) .ifPresentOrElse( System.out::println, () -> System.out.println("Beer not found after deletion") ); }

Explanation of the code: Faker instance: We use the Faker library to generate random test data, including the details of a beer entity. Server configuration: The Server instance is configured with the HarperDB server's URL and authentication credentials (username: root, password: password). Database and table creation: We create a database named "beers" and define a table within it named "beer" with an "id" attribute. Template instance: The Template instance is obtained from the configured server, specifically for the "beers" database. Beer entity operations: Insertion: A randomly generated beer entity is inserted into the "beer" table. Retrieval: The inserted beer is retrieved by its ID and printed. Deletion: The beer entity is deleted by its ID.
Confirmation of deletion: We attempt to retrieve the deleted beer entity and print a message confirming its absence. This code provides a hands-on exploration of the core CRUD operations supported by the HarperDB SDK for Java; by running it, you'll see how the SDK makes database interactions straightforward and efficient. In this hands-on session, we executed a concise yet comprehensive example: by creating a database, defining a table, and manipulating beer entities, we explored the SDK's ability to integrate Java applications with HarperDB's NoSQL features. The demonstrated operations, including insertion, retrieval, and deletion, underscored the SDK's user-friendly approach to CRUD functionality and offered Java developers a practical glimpse into its ease of use. As we proceed, we'll delve deeper into more advanced features and scenarios, building on this foundation.

Conclusion

In conclusion, this article has thoroughly explored the HarperDB SDK for Java, showcasing its capabilities in simplifying the integration of Java applications with HarperDB's NoSQL database. From understanding the core classes like Server and Template to executing practical CRUD operations with a sample beer entity, we've witnessed the user-friendly nature of the SDK. By choosing the HarperDB SDK, developers can streamline database interactions, focusing more on application logic and less on intricate database configurations. For those eager to dive deeper, the accompanying GitHub repository contains the complete source code used in the hands-on session.
Explore, experiment, and adapt the code to your specific use cases. Additionally, the official HarperDB Documentation serves as an invaluable resource, offering in-depth insights into the NoSQL operations API, making it an excellent reference for further exploration. As you embark on your journey with HarperDB and Java, remember that this SDK empowers developers, providing a robust and efficient bridge between Java applications and HarperDB’s NoSQL capabilities. Whether you’re building a small-scale project or a large-scale enterprise application, the HarperDB SDK for Java stands ready to enhance your development experience.