Building a Distributed Job Scheduler from Scratch (Part 2)

A multipart, hands-on series about building a distributed job scheduler from scratch.




Welcome back to the second part of our tutorial series on building a distributed job scheduler! In our previous installment, we laid the foundation by defining the functional and non-functional requirements of our job scheduler. Now, it's time to dive into the heart of our system by designing a durable storage system to store job details effectively. If you're a software engineer eager to learn new technologies, this tutorial is tailored just for you.

Modeling the Job Class

Since we have already defined the various job types and the ways callbacks are configured, modeling the Job class itself is straightforward.

// Builders and getters (used by JobDAO and the tests below) are assumed to
// come from Lombok's @SuperBuilder and @Getter.
@SuperBuilder
@Getter
public abstract class Job implements Serializable {
    String id;
    // The actual HTTP URL where the callback will be made.
    String callbackUrl;
    int successStatusCode;
    // Defines the maximum window for callback execution.
    long relevancyWindow;
    TimeUnit relevancyWindowTimeUnit;
}

@SuperBuilder
@Getter
public class ExactlyOnceJob extends Job {
    LocalDateTime dateTime;
}

@SuperBuilder
@Getter
public class RecurringJob extends Job {
    List<LocalDateTime> dateTimes;
}

@SuperBuilder
@Getter
public class RepeatedJob extends Job {
    LocalDateTime startTime;
    LocalDateTime endTime;
    long repeatInterval;
    TimeUnit repeatIntervalTimeUnit;
}
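To make the relevancy-window fields concrete, here is a small, self-contained sketch. The `deadline` helper is hypothetical (it is not part of the scheduler's code); it only shows how a scheduled time plus `relevancyWindow` in its `TimeUnit` yields the last moment a callback is still worth firing:

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.concurrent.TimeUnit;

public class RelevancyWindowDemo {
    // Last moment the callback is still relevant: scheduled time + window.
    public static LocalDateTime deadline(LocalDateTime scheduled, long window, TimeUnit unit) {
        return scheduled.plus(Duration.ofMillis(unit.toMillis(window)));
    }

    public static void main(String[] args) {
        LocalDateTime scheduled = LocalDateTime.of(2024, 1, 1, 12, 0);
        // A 30-minute relevancy window on a job scheduled at 12:00 expires at 12:30.
        System.out.println(deadline(scheduled, 30, TimeUnit.MINUTES)); // 2024-01-01T12:30
    }
}
```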


Before choosing a database, let's look at the query patterns.

  • Store job details - We need high write throughput to store structured data. Additionally, we must be prepared for possible schema changes in the future.

  • Get job details given an ID - High read throughput to fetch a job's details by key.

  • No transaction guarantees are required.

  • No range scans are required.

Considering these requirements, a NoSQL key-value store like Cassandra or HBase is a good fit. For this tutorial, we'll use Apache HBase: it gives us fast key-based reads and writes over a flexible, column-family data model that tolerates schema changes.
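These patterns boil down to storing and fetching an opaque value by key. Here is a minimal, HBase-free sketch of that storage model using plain JDK serialization; the `Payload` record is a hypothetical stand-in for the real Job classes, which also implement Serializable:

```java
import java.io.*;

public class SerDemo {
    // Stand-in for the Job payload stored against the job id.
    public record Payload(String id, String callbackUrl) implements Serializable {}

    // Serialize the whole object into one opaque byte[] value.
    public static byte[] toBytes(Serializable s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(s);
        }
        return bos.toByteArray();
    }

    // Deserialize the byte[] back into the object.
    public static Object fromBytes(byte[] b) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(b))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Payload p = new Payload("job-1", "http://example.com/cb");
        Payload back = (Payload) fromBytes(toBytes(p));
        System.out.println(back.id()); // job-1
    }
}
```

Because every access is a point lookup by id, the database never needs to understand the value's structure, which is exactly what a key-value store is optimized for.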

Hello, HBase!

If you are new to the world of HBase, I recommend first reading a short primer on its data model: data lives in tables, each row is identified by a row key, and columns are grouped into column families.

Installing HBase in standalone mode is a five-minute affair; the official Apache HBase quick-start guide walks you through it.

Now it's time to write some boilerplate utility code to interact with our newly created HBase Server.

public class HBaseManager {

    private final Admin admin;
    private final Connection connection;

    public HBaseManager() throws IOException {
        Configuration config = HBaseConfiguration.create();
        // Load hbase-site.xml from the classpath so the client can find the cluster.
        String path = Objects.requireNonNull(this.getClass().getClassLoader().getResource("hbase-site.xml")).getPath();
        config.addResource(new Path(path));
        connection = ConnectionFactory.createConnection(config);
        admin = connection.getAdmin();
    }

    public boolean tableExists(String name) throws IOException {
        TableName table = TableName.valueOf(name);
        return admin.tableExists(table);
    }

    public void createTable(String name, String columnFamily) throws IOException {
        if (!tableExists(name)) {
            TableName table = TableName.valueOf(name);
            HTableDescriptor descriptor = new HTableDescriptor(table);
            descriptor.addFamily(new HColumnDescriptor(columnFamily));
            admin.createTable(descriptor);
        }
    }

    public Table getTable(String name) throws IOException {
        TableName tableName = TableName.valueOf(name);
        return connection.getTable(tableName);
    }

    public void put(Table table, Put value) throws IOException {
        table.put(value);
    }

    public Result get(Table table, String id) throws IOException {
        Get key = new Get(Bytes.toBytes(id));
        return table.get(key);
    }
}
Now that our utility code is in place, we can proceed to create a Data Access Object (DAO) layer responsible for storing and retrieving job details.

public class JobDAO {
    HBaseManager hBaseManager;
    String columnFamily = "cf";
    String data = "data";
    String tableName = "jobDetails";
    Table table;

    public JobDAO() throws IOException {
        hBaseManager = new HBaseManager();
        hBaseManager.createTable(tableName, columnFamily);
        table = hBaseManager.getTable(tableName);
    }

    public void registerJob(Job job) throws IOException {
        // Row key is the job id; the serialized job is the cell value.
        byte[] row = Bytes.toBytes(job.getId());
        Put put = new Put(row);
        put.addColumn(Bytes.toBytes(columnFamily), Bytes.toBytes(data), SerializationUtils.serialize(job));
        hBaseManager.put(table, put);
    }

    public Job getJobDetails(String id) throws IOException {
        Result result = hBaseManager.get(table, id);
        byte[] value = result.getValue(Bytes.toBytes(columnFamily), Bytes.toBytes(data));
        return (Job) SerializationUtils.deserialize(value);
    }
}
To ensure the functionality of our system, we'll rely on JUnit tests to validate our code. This step is crucial to confirm that our storage system works as expected.

public class JobDAOTest {

    @Test
    public void testRegisterJob() throws IOException {
        JobDAO jobDAO = new JobDAO();
        String id = UUID.randomUUID().toString();
        ExactlyOnceJob exactlyOnceJob = ExactlyOnceJob.builder()
                .id(id)
                .dateTime(LocalDateTime.now())
                .build();
        Assertions.assertDoesNotThrow(() -> jobDAO.registerJob(exactlyOnceJob));
        ExactlyOnceJob job = (ExactlyOnceJob) jobDAO.getJobDetails(id);
        Assertions.assertEquals(id, job.getId());
    }
}

Project Structure

Maven pom.xml

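The original pom did not survive intact, so here is a minimal sketch of the dependencies the code above relies on. The `groupId`/`artifactId` of the project itself and the version numbers are illustrative assumptions, not prescriptions:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>job-scheduler</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <!-- HBase Java client: Connection, Admin, Table, Put, Get -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>2.4.17</version>
        </dependency>
        <!-- SerializationUtils used by JobDAO -->
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.12.0</version>
        </dependency>
        <!-- Builders/getters on the Job classes (assumed Lombok) -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.30</version>
            <scope>provided</scope>
        </dependency>
        <!-- JUnit 5 for JobDAOTest -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>5.9.3</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
```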



Congratulations! In this second part of our tutorial series, we've made significant progress. We've chosen a suitable database (HBase), implemented the necessary code, and validated it through test cases. But the journey doesn't end here. In the next installment (part 3), we'll delve into modeling repeated jobs. Do take a pause and think about why they need to be modeled separately. Stay tuned for more exciting insights!


Did you find this article valuable?

Support Snehasish Roy by becoming a sponsor. Any amount is appreciated!