Is your ServiceNow MID Server running slow during integrations? 🤔 The issue might be high Java I/O wait times. But don’t worry! In this article, we’ll show you how to find and fix these issues. Discover simple tips to improve the performance of your ServiceNow integrations. Ready to get started? Let’s dive in!
Impact of High Java I/O Wait Times
High Java I/O wait times in applications can lead to several issues:
Slow Performance
Frequent or prolonged I/O wait times can slow down your application. This happens because the JVM pauses while waiting for I/O operations to complete, causing delays.
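You can observe this blocking behavior directly. The sketch below (class name and 5 MB payload are our own choices, not from the MID Server code) times a write-then-read cycle; the calling thread is blocked for the entire measured interval:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class BlockingIoTimer {
    // Measures how long the calling thread is blocked by a write-then-read cycle.
    static long timeBlockingIoMs() throws IOException {
        Path tmp = Files.createTempFile("io-timer", ".txt");
        byte[] payload = new byte[5 * 1024 * 1024]; // 5 MB of zero bytes
        long start = System.nanoTime();
        Files.write(tmp, payload);   // the thread blocks here until the write completes
        Files.readAllBytes(tmp);     // and blocks again for the read
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        Files.deleteIfExists(tmp);
        return elapsedMs;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("Blocking I/O took " + timeBlockingIoMs() + " ms");
    }
}
```

On a machine with a busy or slow disk, the reported time grows noticeably, which is exactly the delay your application's threads absorb.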
Freezes and Timeouts
Long I/O wait times can make the application momentarily freeze. In systems like a ServiceNow MID server, this might lead to timeouts and missed requests, affecting service reliability.
High Resource Usage
Excessive I/O operations can consume more CPU and memory resources. The JVM expends more effort managing these operations, placing additional load on system resources.
Hard to Diagnose
Intermittent I/O waits can be difficult to diagnose. Because they occur sporadically, they are hard to identify without detailed logs and thorough analysis.
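One lightweight way to catch these sporadic waits is to time each I/O call and log only the slow ones. This is a minimal sketch; the 100 ms threshold and the helper names are assumptions you would tune for your environment:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.logging.Logger;

public class SlowIoDetector {
    private static final Logger LOGGER = Logger.getLogger(SlowIoDetector.class.getName());
    static final long THRESHOLD_MS = 100; // assumed threshold; tune per environment

    interface IoTask { void run() throws IOException; }

    // Runs an I/O task, returns elapsed ms, and logs a warning only when it is slow.
    static long timed(String label, IoTask task) throws IOException {
        long start = System.nanoTime();
        task.run();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs > THRESHOLD_MS) {
            LOGGER.warning(label + " took " + elapsedMs + " ms (threshold " + THRESHOLD_MS + " ms)");
        }
        return elapsedMs;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("slow-io", ".txt");
        long ms = timed("write", () -> Files.write(tmp, new byte[1024]));
        System.out.println("write took " + ms + " ms");
        Files.deleteIfExists(tmp);
    }
}
```

Because only outliers are logged, the log stays quiet in normal operation and points straight at the intermittent waits when they do occur.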
Scalability Issues
Inefficient I/O management can significantly hinder how well your application scales. As I/O handling becomes less effective, handling more requests or larger volumes of data grows increasingly difficult, leading to slow response times and degraded overall performance just as demand is rising.
Simulating High Java I/O Wait Times in MID Server
The Java program given below simulates high I/O wait times by performing intensive file read/write operations:
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.logging.Level;
import java.util.logging.Logger;

public class HighIOWaitSimulator {

    private static final Logger LOGGER = Logger.getLogger(HighIOWaitSimulator.class.getName());
    private static final String FILE_PATH = "io_wait_simulation.txt";
    private static final int ITERATIONS = 35;
    private static final int LINES = 100000;

    public static void main(String[] args) {
        LOGGER.info("Starting High I/O Wait Simulator...");
        long startTime = System.currentTimeMillis();
        for (int i = 0; i < ITERATIONS; i++) {
            long iterationStartTime = System.currentTimeMillis();
            LOGGER.info("Iteration " + (i + 1) + " of " + ITERATIONS + ": Performing I/O operations...");
            simulateHighIOWait();
            long iterationEndTime = System.currentTimeMillis();
            LOGGER.info("Iteration " + (i + 1) + " of " + ITERATIONS + " completed in " + (iterationEndTime - iterationStartTime) + " ms");
        }
        long endTime = System.currentTimeMillis();
        LOGGER.info("High I/O Wait Simulator finished. Total execution time: " + (endTime - startTime) + " ms");
    }

    // Appends LINES lines to the file, then reads the whole file back to generate I/O load.
    private static void simulateHighIOWait() {
        try (BufferedWriter writer = new BufferedWriter(new FileWriter(FILE_PATH, true))) {
            for (int i = 0; i < LINES; i++) {
                writer.write("Simulating high I/O wait time...\n");
            }
        } catch (IOException e) {
            LOGGER.log(Level.SEVERE, "Failed to write to file: ", e);
        } finally {
            try {
                Files.readAllLines(Paths.get(FILE_PATH));
            } catch (IOException e) {
                LOGGER.log(Level.SEVERE, "Failed to read from file: ", e);
            }
        }
    }
}
The HighIOWaitSimulator program simulates high I/O wait times by performing intensive read/write operations on a file. It writes a specified number of lines to a file (io_wait_simulation.txt) and reads the same file back over a series of iterations (35 in this case). The program logs the progress and elapsed time of each iteration, providing insight into the I/O operations. The simulateHighIOWait method handles the actual writing and reading, logging any I/O exceptions encountered. Finally, the program logs the total execution time once all iterations are complete.
Simulation of High Java I/O Wait Times in ServiceNow MID Server
Let’s run our application on the ServiceNow MID Server and see what high Java I/O wait times look like. First, compile the program and package it into a JAR file:
javac HighIOWaitSimulator.java
jar cf HighIOWaitSimulator.jar HighIOWaitSimulator.class
Once a JAR file is created, let’s upload and run this program in the ServiceNow MID Server as documented in the MID Server setup guide. This guide provides a detailed walkthrough on how to run a custom Java application in the ServiceNow MID Server infrastructure. The steps are as follows:
- Creating a ServiceNow application
- Installing MID Server on AWS EC2 instance
- Configuring MID Server
- Installing Java application within MID Server
- Running Java application from MID server
We strongly encourage you to review this guide if you are unsure how to run custom Java applications in the ServiceNow MID Server infrastructure.
yCrash’s High Java I/O Wait Times Analysis
High I/O wait times can be a significant contributing factor to the issues caused by consecutive Full GCs in your application. Consecutive Full GCs indicate frequent garbage collection cycles, which can worsen problems such as intermittent OutOfMemoryErrors, degraded response times, and high CPU consumption. These extended garbage collection cycles often involve reading and writing data to disk; if the I/O subsystem is slow, these operations take even longer, causing extended application pauses.
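If you want a quick first check of GC activity before reaching for a full analysis tool, the JVM exposes cumulative collection counters through the standard java.lang.management API. This is a minimal sketch (class and method names are our own), not a replacement for the analysis described here:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcSnapshot {
    // Prints cumulative collection counts and times for each collector in this JVM.
    static long totalGcTimeMs() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                    + " collections, " + gc.getCollectionTime() + " ms");
            if (gc.getCollectionTime() > 0) { // getCollectionTime() may return -1 if unsupported
                total += gc.getCollectionTime();
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.gc(); // request a collection so the counters are non-trivial (best effort)
        System.out.println("Total GC time so far: " + totalGcTimeMs() + " ms");
    }
}
```

A steadily climbing total, sampled periodically, is the same "consecutive Full GC" signal discussed above.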

Monitoring the Thread Count Summary can help optimize your application’s performance by revealing bottlenecks and resource-utilization patterns. With 18 threads in the RUNNABLE state, most threads are actively processing tasks, which is generally a good sign. The two WAITING threads indicate that some operations may be waiting for I/O or other resources, which could potentially be optimized to improve throughput. Reviewing the states of these threads can help pinpoint inefficient code or synchronization issues. By understanding these thread states, you can make targeted improvements to reduce latency and enhance overall efficiency.
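A summary like this can also be produced in-process. The sketch below (our own helper, not part of yCrash) tallies live threads by state using the standard Thread API:

```java
import java.util.EnumMap;
import java.util.Map;

public class ThreadStateSummary {
    // Counts live threads by state, similar to a thread count summary.
    static Map<Thread.State, Integer> countByState() {
        Map<Thread.State, Integer> counts = new EnumMap<>(Thread.State.class);
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            counts.merge(t.getState(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        countByState().forEach((state, n) -> System.out.println(state + ": " + n));
    }
}
```

Sampling this map periodically and watching the WAITING and BLOCKED counts is a cheap way to spot a growing I/O backlog.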
The thread count and stack trace information from yCrash is invaluable for identifying potential performance bottlenecks in our application. With 10 threads showing identical stack traces, we can pinpoint repetitive tasks that may indicate inefficiencies. Multiple threads in the RUNNABLE state performing similar operations, such as reading lines from a file, highlight areas to optimize in order to reduce I/O wait times. The single WAITING thread may point to resource contention or an inefficient waiting mechanism that needs to be addressed. Overall, this detailed information enables targeted performance improvements, enhancing the application’s efficiency. You can see the detailed report here.
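The "identical stack traces" signal can likewise be reproduced with a few lines of standard Java. This sketch (class name and grouping key are our own choices) groups live threads by their exact stack trace, so groups with a high count stand out as candidates for the kind of repetitive I/O work described above:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class DuplicateStackFinder {
    // Groups live threads by identical stack trace; large groups hint at repetitive work.
    static Map<String, Integer> groupByStackTrace() {
        Map<String, Integer> groups = new HashMap<>();
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            String key = Arrays.toString(e.getValue()); // stringified trace as grouping key
            groups.merge(key, 1, Integer::sum);
        }
        return groups;
    }

    public static void main(String[] args) {
        groupByStackTrace().forEach((trace, n) -> {
            if (n > 1) {
                System.out.println(n + " threads share: " + trace);
            }
        });
    }
}
```

In a scenario like ours, ten threads sharing a trace that ends in a file-read call would surface immediately as one group with a count of 10.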
Conclusion
In summary, yCrash’s analysis provides valuable insights into thread behavior within our application. The tool’s detailed thread count and stack trace analysis help identify potential performance bottlenecks and inefficiencies, such as repetitive tasks and resource contention. Additionally, yCrash provides visibility into the states of different threads, making it easier to pinpoint areas that may require optimization, such as reducing I/O wait times or addressing inefficient waiting mechanisms. The comprehensive reporting features aid in understanding the root issues affecting our application’s performance. If you want to diagnose performance problems in your application using yCrash, you may register here.

Share your Thoughts!