Building a Scalable Logging Service: Akka Lightbend vs. Kafka Confluent

YISUSVII
6 min read · Sep 2, 2024

In today’s complex distributed systems, effective logging is critical for monitoring, debugging, and ensuring system reliability. For organizations that manage multiple applications across various environments, creating a centralized logging service that can handle different log levels — such as critical, warning, and info — is essential.

In this article, we will explore two powerful technologies for building a scalable logging service: Akka Lightbend and Kafka Confluent. Both offer unique advantages and are suitable for different use cases. We’ll compare them in terms of architecture, data flow, processing, log level handling, and integration, helping you decide which is the right choice for your organization.

Why Logging Matters

Logging serves as the lifeblood of any robust system, providing visibility into what’s happening under the hood. Whether you’re tracking critical errors, warnings, or general information, a good logging service ensures you can react quickly to issues, maintain system health, and improve performance over time.

As your applications scale, so too must your logging infrastructure. This is where Akka and Kafka come into play, offering scalable solutions tailored to different needs.

Akka Lightbend: Actor-Based Logging

Architecture

Akka Lightbend is built around the actor model, which excels in distributed systems by allowing you to build highly concurrent and fault-tolerant applications. In an Akka-based logging service:

  • Scalability: Akka scales vertically within a node and horizontally across nodes (e.g., with Akka Cluster).
  • Concurrency: Akka’s actor model handles concurrent logging events efficiently.
  • Fault Tolerance: Akka’s supervision strategies keep your logging service resilient (see the sketch after this list).
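
To make the fault-tolerance point concrete, here is a minimal supervision sketch; the actor names and failure handling are illustrative, not a prescribed design:

import akka.actor.{Actor, ActorRef, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Escalate, Restart}
import scala.concurrent.duration._

// Hypothetical worker that may fail while writing logs downstream
class LogWriter extends Actor {
  def receive: Receive = {
    case line: String => println(s"writing: $line")
  }
}

// Supervisor: restarts a crashed writer instead of losing the whole service
class LoggingSupervisor extends Actor {
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
      case _: RuntimeException => Restart // retry transient failures
      case _: Exception        => Escalate
    }

  private val writer: ActorRef = context.actorOf(Props[LogWriter](), "writer")

  def receive: Receive = {
    case line: String => writer ! line
  }
}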

Data Flow and Processing

In Akka, logs can be collected asynchronously by actors, which then forward them to central actors for aggregation and processing. Depending on the log level, different actors can handle logs differently (see the sketch after this list). For example:

  • Critical logs might trigger immediate alerts.
  • Warning logs could be reviewed periodically.
  • Info logs could be stored for long-term analysis.
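
As a minimal sketch of that routing (the handler actors and their reactions are illustrative), a parent actor can forward each log to a level-specific child:

import akka.actor.{Actor, ActorRef, Props}

// One message type per severity
sealed trait LogMessage
case class CriticalLog(message: String) extends LogMessage
case class WarningLog(message: String) extends LogMessage
case class InfoLog(message: String) extends LogMessage

// Illustrative level-specific handlers
class CriticalHandler extends Actor {
  def receive: Receive = { case CriticalLog(m) => println(s"ALERT NOW: $m") }
}
class WarningHandler extends Actor {
  def receive: Receive = { case WarningLog(m) => println(s"Queue for review: $m") }
}
class InfoHandler extends Actor {
  def receive: Receive = { case InfoLog(m) => println(s"Archive: $m") }
}

// Central router: inspects the level and forwards to the matching child
class LogRouter extends Actor {
  private val critical: ActorRef = context.actorOf(Props[CriticalHandler](), "critical")
  private val warning: ActorRef  = context.actorOf(Props[WarningHandler](), "warning")
  private val info: ActorRef     = context.actorOf(Props[InfoHandler](), "info")

  def receive: Receive = {
    case m: CriticalLog => critical ! m
    case m: WarningLog  => warning ! m
    case m: InfoLog     => info ! m
  }
}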

With Akka Persistence, you can store logs in databases or distributed storage for future reference, ensuring your logs are both durable and accessible.
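
A minimal sketch of that idea with a classic PersistentActor follows; it assumes an akka-persistence dependency and a journal plugin configured in application.conf (neither shown), and the LogEvent type is illustrative:

import akka.persistence.PersistentActor

case class LogEvent(level: String, message: String)

// Journals every log event it receives so logs survive restarts
class LogJournal extends PersistentActor {
  override def persistenceId: String = "log-journal"

  override def receiveCommand: Receive = {
    case event: LogEvent =>
      persist(event) { persisted =>
        // Runs only after the event is durably written to the journal
        println(s"Persisted: $persisted")
      }
  }

  override def receiveRecover: Receive = {
    case event: LogEvent =>
      // Replayed from the journal when the actor restarts
      println(s"Recovered: $event")
  }
}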

Log Levels and Filtering

Log levels can be implemented by defining different actors or actor hierarchies. Each actor can be responsible for filtering and processing logs based on their severity:

  • Critical: Immediate processing and alerting.
  • Warning: Processing and periodic review.
  • Info: Logging and storage for analysis.

Integration and Ecosystem

Akka integrates seamlessly with JVM-based systems, making it an ideal choice for organizations already invested in the JVM ecosystem. Lightbend offers commercial support, ensuring you have the resources you need to scale and maintain your logging service.

Example Code

To get started with Akka for logging, see the full runnable example in the demo section at the end of this article.

Kafka Confluent: Distributed Streaming for Logs

Architecture

Kafka is a distributed streaming platform that excels in handling high-throughput, real-time data streams. In a Kafka-based logging service:

  • Scalability: Kafka scales horizontally across brokers to handle large volumes of log data.
  • Concurrency: Kafka’s partitioning model enables parallel log processing (see the topic-creation sketch after this list).
  • Fault Tolerance: Kafka’s replication and failover capabilities ensure reliable log persistence.
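
As a sketch of how that layout might be provisioned, written in Scala to match the Akka examples (the partition counts and replication factors are illustrative values for a local cluster), the AdminClient API can create one topic per level:

import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, NewTopic}

object CreateLogTopics extends App {
  val props = new Properties()
  props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
  val admin = AdminClient.create(props)

  // More partitions for chattier levels; replication for durability
  val topics = List(
    new NewTopic("logs_critical", 3, 3.toShort),
    new NewTopic("logs_warning", 6, 2.toShort),
    new NewTopic("logs_info", 12, 2.toShort)
  )

  admin.createTopics(topics.asJava).all().get() // block until created
  admin.close()
}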

Data Flow and Processing

Logs can be produced to Kafka topics by various applications. Each log level — critical, warning, and info — can have its own topic, keeping log management organized. Kafka Streams or another processing framework can then filter and route these logs in real time (a sketch follows this list):

  • Critical logs: Can be processed immediately for alerts.
  • Warning logs: Can be filtered and reviewed periodically.
  • Info logs: Can be stored in Kafka topics with a configurable retention period.
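
Here is a minimal Kafka Streams sketch of that routing; it assumes a kafka-streams-scala dependency and a hypothetical combined logs_all topic whose record values are prefixed with their level (records matching no prefix are simply dropped):

import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes._

object LogLevelSplitter extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "log-level-splitter")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")

  val builder = new StreamsBuilder()
  // Hypothetical combined topic; values look like "[CRITICAL] disk full"
  val logs = builder.stream[String, String]("logs_all")

  // Route each record to the topic matching its level prefix
  logs.filter((_, v) => v.startsWith("[CRITICAL]")).to("logs_critical")
  logs.filter((_, v) => v.startsWith("[WARNING]")).to("logs_warning")
  logs.filter((_, v) => v.startsWith("[INFO]")).to("logs_info")

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.addShutdownHook(streams.close())
}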

When configured for it, with idempotent producers and transactions enabled, Kafka’s exactly-once semantics ensure your logs are delivered without duplicates or lost entries; the default producer settings provide at-least-once delivery instead.
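
As a sketch of the producer settings involved (the transactional id is an arbitrary illustrative value):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

object ExactlyOnceProducer extends App {
  val props = new Properties()
  props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
  props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
  // Idempotence lets the broker de-duplicate retried sends
  props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
  props.put(ProducerConfig.ACKS_CONFIG, "all")
  // Transactions make a batch of sends atomic ("log-producer-1" is illustrative)
  props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "log-producer-1")

  val producer = new KafkaProducer[String, String](props)
  producer.initTransactions()
  producer.beginTransaction()
  producer.send(new ProducerRecord[String, String]("logs_critical", "Disk failure on node-3"))
  producer.commitTransaction()
  producer.close()
}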

Log Levels and Filtering

In Kafka, log levels are typically managed by assigning them to different topics:

  • Critical: logs_critical
  • Warning: logs_warning
  • Info: logs_info

Consumers can subscribe to the topics they need, filtering logs based on their importance or relevance.
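
For example, a minimal alerting service might subscribe only to the critical topic (the group id and the println alert are illustrative):

import java.time.Duration
import java.util.Properties
import scala.jdk.CollectionConverters._
import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.serialization.StringDeserializer

object CriticalLogAlerter extends App {
  val props = new Properties()
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
  props.put(ConsumerConfig.GROUP_ID_CONFIG, "critical-alerter")
  props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
  props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, classOf[StringDeserializer].getName)
  props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")

  val consumer = new KafkaConsumer[String, String](props)
  // Only the severity this service cares about
  consumer.subscribe(java.util.Collections.singletonList("logs_critical"))

  while (true) {
    val records = consumer.poll(Duration.ofSeconds(1))
    records.asScala.foreach(r => println(s"ALERT: ${r.value}"))
  }
}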

Integration and Ecosystem

Kafka’s broad ecosystem and integrations make it a versatile choice for diverse environments. Whether your applications are cloud-native, on-premises, or hybrid, Kafka’s flexibility ensures seamless integration.

Example Code

To get started with Kafka for logging, see the full runnable example in the demo section at the end of this article.

Choosing the Right Tool for the Job

Akka Lightbend is a great choice if your existing systems are JVM-based and you need fine-grained control over concurrency and fault tolerance. It’s especially suitable for more complex, actor-based architectures that require tight integration within the JVM ecosystem.

Kafka Confluent, on the other hand, is ideal if you need a highly scalable, distributed logging system that can handle large volumes of data with strong durability guarantees. Kafka’s integration capabilities make it a powerful choice for enterprise-level logging across diverse platforms.

Ultimately, the right choice depends on your specific requirements, existing infrastructure, and the level of integration you need.


DEMO TIME!!!

Akka Lightbend: Simple Logging Service

This example demonstrates how to create a basic logging service using Akka, where a logger actor pattern matches on the log level and handles each message accordingly.

// build.sbt
name := "AkkaLoggingService"
version := "0.1"
scalaVersion := "2.13.8"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.6.20"
)

// src/main/scala/LoggingService.scala
import akka.actor.{Actor, ActorRef, ActorSystem, Props}

// Log messages, one type per severity
sealed trait LogMessage
case class CriticalLog(message: String) extends LogMessage
case class WarningLog(message: String) extends LogMessage
case class InfoLog(message: String) extends LogMessage

// Logger actor: pattern matches on the log level
class LoggerActor extends Actor {
  override def receive: Receive = {
    case CriticalLog(message) => println(s"[CRITICAL]: $message")
    case WarningLog(message)  => println(s"[WARNING]: $message")
    case InfoLog(message)     => println(s"[INFO]: $message")
  }
}

object LoggingService extends App {
  implicit val system: ActorSystem = ActorSystem("LoggingService")

  val logger: ActorRef = system.actorOf(Props[LoggerActor](), "logger")

  // Simulate logging messages
  logger ! CriticalLog("Critical error occurred!")
  logger ! WarningLog("This is a warning.")
  logger ! InfoLog("Just some information.")

  // Give the actor a moment to drain its mailbox, then shut down
  Thread.sleep(500)
  system.terminate()
}

Explanation:

  • The LoggerActor handles three types of logs: CriticalLog, WarningLog, and InfoLog.
  • Each log level triggers a different print statement.
  • The LoggingService object initializes the Akka actor system and sends some log messages to the LoggerActor.

Kafka Confluent: Simple Logging Service

This example demonstrates how to create a basic logging service using Kafka, where different log levels are sent to different Kafka topics.

// build.gradle
plugins {
    id 'java'
    id 'application'
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.apache.kafka:kafka-clients:3.0.0'
}

application {
    mainClass = 'LoggingService'
}

// src/main/java/LoggingService.java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class LoggingService {

    private static KafkaProducer<String, String> producer;

    public static void main(String[] args) {
        // Kafka producer configuration
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        producer = new KafkaProducer<>(props);

        // Send logs to different topics based on their level
        sendLog("logs_critical", "Critical error occurred!");
        sendLog("logs_warning", "This is a warning.");
        sendLog("logs_info", "Just some information.");

        // Close the producer (this also flushes any buffered records)
        producer.close();
    }

    private static void sendLog(String topic, String message) {
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, message);
        producer.send(record);
    }
}

Explanation:

  • The LoggingService class uses Kafka's KafkaProducer to send log messages to different topics (logs_critical, logs_warning, logs_info).
  • The logs are routed to different topics based on their severity.

Running the Examples

Akka Example:

  1. Ensure you have Scala and sbt installed.
  2. Run the Akka project using sbt run.
  3. You should see the log messages printed to the console.

Kafka Example:

  1. Ensure you have Kafka installed and running on localhost:9092.
  2. Compile and run the Kafka project using gradle run.
  3. The logs will be sent to the respective Kafka topics.

These examples provide a simple starting point. In a real-world scenario, you would likely add more features, such as log aggregation, alerting, and persistence.

Author Bio:

YISUSVII is a Staff Site Reliability Engineer and Sr Cloud Specialist with 10 years of experience in designing and managing large-scale distributed systems. Passionate about cloud technologies, he is also a party lover, dreamer, and storyteller.
