Building an Automation Management System with PLUG
Automations not only speed up tasks but also save a significant amount of resources, and they reduce stress for individuals by taking repetitive manual work off their plates.
Enterprise software has thousands of moving parts, and keeping all of them running smoothly is the tough part. We need to ensure not only that the core product runs smoothly, but also that all the supporting automations run without friction.
As the software grows, the number of automations written around the core product grows significantly. Often they end up scattered and rewritten: multiple engineers waste time building the same automations, and many don't even know that tools which could speed up their tasks already exist.
We faced similar issues a few months back, so we came up with a solution called PLUG (Plugin Layer for Unified Governance). It's a platform where an engineer can build both simple and complex automations and share them across teams.
We wanted this platform to support multiple programming languages, especially Node.js and Python.
Whenever an engineer asks why they should build on PLUG, I give them the following answers:
- 🔧 Build & deploy modular, independent plugins with minimal overhead
- 🧩 Access built-in services: ✉️ Email, 📜 Logging, ☁️ AWS SDKs, 🤖 OpenAI APIs, and more
- ⚙️ Spin up UIs fast — decoupled and self-contained
- 🚀 Offload heavy automation by deploying it to a server — no need to run it locally
- 🕒 Schedule cron jobs and run event-driven tasks with ease
- 📦 Centralized plugin registry to promote reuse & avoid duplicated logic
- 🫴 And most importantly, share your work across teams and save other engineers' time.
Some of the other benefits are:
- ✅ Build independent components (both UI and Backend) that can be integrated with the dashboard.
- ✅ Single PR for both UI and Backend changes, quick to review & deploy.
- ✅ Plugins can use the Database for critical tasks that require storage. (The PLUG platform uses SQLite)
To achieve the above points, we need a robust architecture. We came up with an architecture in which an engineer can easily build, deploy, and maintain plugins over the long run.
Before I show you the complete architecture, let me provide you with a high-level overview. The image below shows what we need:
Here, P1, P2, …, Pn are plugins that engineers develop, and lily-com is an SDK that holds core Locus functionalities and third-party packages.
The bootstrap generator automatically creates the required directories and files for a new plugin. This removes a lot of overhead: the engineer can start on the plugin's business logic right away instead of setting up infrastructure. In short, the template system accelerates plugin creation by generating the boilerplate.
Standardization benefits
This standardization means:
- Predictable plugin structure
- Easier code review
- Faster onboarding
- Consistent patterns across plugins
An engineer developing a Plugin needs to follow just 4 steps:
- Run the scaffold command
- Add business logic to controllers/services
- Register routes in routes/plug.ts
- Deploy
The infrastructure is pre-configured.
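To make step 3 concrete, here is a rough sketch of what registering a plugin's routes in routes/plug.ts could look like. It assumes an Express-style router; the endpoints and handlers are illustrative, not the actual PLUG API.

```ts
// routes/plug.ts — hypothetical sketch of how a scaffolded plugin might register routes.
// Assumes an Express-style router; the endpoints and handlers are illustrative only.
import { Router, type Request, type Response } from "express";

const router = Router();

// A plugin keeps its business logic in controllers/services and only wires routes here.
router.get("/health", (_req: Request, res: Response) => {
  res.json({ plugin: "sample-plugin", status: "ok" });
});

router.post("/run", async (req: Request, res: Response) => {
  // In a real plugin this would call into the plugin's service layer.
  const { payload } = req.body as { payload?: unknown };
  res.json({ accepted: true, received: payload ?? null });
});

export default router;
```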
The entire PLUG platform runs on AWS EC2. Below is a high-level overview of how it is managed inside the server.
Let's dive deeper into the architecture:
PLUG uses a multi-process architecture under PM2, isolating concerns and enabling independent scaling. The TypeScript, Python, agent, and sidecar servers each run on their own port.
This separation isolates failures and allows independent scaling. If a plugin crashes in the Python server, TypeScript plugins are unaffected. PM2 provides health monitoring, auto-restart, and zero-downtime reloads.
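For illustration, a PM2 ecosystem file for this kind of multi-process layout might look roughly like the sketch below. The process names, scripts, and ports are assumptions, not PLUG's real configuration.

```js
// Hypothetical PM2 process layout; in practice this would live in ecosystem.config.js.
// Names, scripts, and ports are assumptions; PM2 handles restarts and reloads per process.
module.exports = {
  apps: [
    { name: "plug-ts", script: "dist/server.js", env: { PORT: 4000 } },
    { name: "plug-python", script: "python_server/app.py", interpreter: "python3", env: { PORT: 4001 } },
    { name: "plug-agents", script: "dist/agents/server.js", env: { PORT: 4002 } },
    { name: "plug-sidecar", script: "dist/sidecar/server.js", env: { PORT: 4003 } },
  ],
};
```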
We have an SSO-based authentication mechanism implemented on the PLUG platform.
Multi-Stack SSO orchestration: Dynamic environment routing
PLUG handles multiple stacks (Devo, Prod, Demo) from a single platform using dynamic middleware injection for stack differentiation.
Dynamic middleware injection
We use dynamic middleware that changes at runtime based on the stack. When a user logs in via SSO, the system:
- Detects the requested stack
- Cleans existing auth middleware
- Injects stack-specific Auth0 configuration
- Configures client IDs, secrets, and audience per stack
This enables a single codebase to support multiple authentication realms without multiple deployments.
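As a minimal sketch of the idea, assuming Express and the express-openid-connect flavour of Auth0 middleware (the stack names, secrets, and helper below are placeholders, not PLUG's actual code):

```ts
// Hypothetical sketch of swapping Auth0 middleware at runtime per stack.
// Assumes Express + express-openid-connect; all stack values below are placeholders.
import express, { type RequestHandler } from "express";
import { auth, type ConfigParams } from "express-openid-connect";

const app = express();

// Per-stack Auth0 settings (client IDs, secrets, issuer) — illustrative only.
const stackConfigs: Record<string, ConfigParams> = {
  devo: {
    clientID: "devo-client-id",
    secret: process.env.DEVO_AUTH_SECRET,
    issuerBaseURL: "https://devo-tenant.auth0.com",
    baseURL: "https://plug.internal.example.com",
    authRequired: true,
  },
  prod: {
    clientID: "prod-client-id",
    secret: process.env.PROD_AUTH_SECRET,
    issuerBaseURL: "https://prod-tenant.auth0.com",
    baseURL: "https://plug.internal.example.com",
    authRequired: true,
  },
};

// The active auth middleware lives in a mutable slot; swapping the slot
// replaces the Auth0 configuration without re-mounting any routes.
let currentAuth: RequestHandler = auth(stackConfigs.prod);
app.use((req, res, next) => currentAuth(req, res, next));

// Called after SSO login detects which stack the user requested.
export function switchStack(stack: "devo" | "prod"): void {
  currentAuth = auth(stackConfigs[stack]);
}
```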
Service discovery with GitHub-based configuration
Stack configuration is fetched from GitHub and cached in SQLite for performance. The sidecar's ServiceProvider class:
- Reads base config from GitHub
- Extracts service discovery paths
- Fetches environment-specific API endpoints
- Caches responses in SQLite with a fallback
This keeps endpoint configuration external to code and enables environment-specific routing.
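A rough sketch of what such a ServiceProvider could look like, assuming a raw GitHub config URL and better-sqlite3 for the cache; the table layout and config shape are assumptions:

```ts
// Hypothetical sketch of GitHub-backed service discovery with a SQLite cache.
// The config URL, table layout, and config shape are assumptions.
import Database from "better-sqlite3";

const db = new Database("plug-cache.db");
db.exec("CREATE TABLE IF NOT EXISTS service_config (stack TEXT PRIMARY KEY, payload TEXT, fetched_at INTEGER)");

export class ServiceProvider {
  constructor(private configUrl: string) {}

  async getEndpoints(stack: string): Promise<Record<string, string>> {
    try {
      // 1. Read the base config from GitHub (raw file URL).
      const res = await fetch(`${this.configUrl}/${stack}.json`);
      if (!res.ok) throw new Error(`config fetch failed: ${res.status}`);
      const config = (await res.json()) as Record<string, string>;

      // 2. Cache the fresh response in SQLite.
      db.prepare("INSERT OR REPLACE INTO service_config VALUES (?, ?, ?)")
        .run(stack, JSON.stringify(config), Date.now());
      return config;
    } catch {
      // 3. Fall back to the cached copy if GitHub is unreachable.
      const row = db.prepare("SELECT payload FROM service_config WHERE stack = ?")
        .get(stack) as { payload: string } | undefined;
      if (!row) throw new Error(`no cached config for stack ${stack}`);
      return JSON.parse(row.payload);
    }
  }
}
```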
Token-based stack differentiation
JWT tokens include audience claims. The system:
- Decodes JWT to extract audience
- Maps the audience to the stack
- Sets the stackInfo cookie for subsequent requests
- Routes API calls to the correct stack endpoints
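A minimal sketch of this flow, assuming jsonwebtoken for decoding and an illustrative audience-to-stack map:

```ts
// Hypothetical sketch of mapping a JWT audience claim to a stack and
// persisting it in a cookie. Audience values and cookie options are assumptions.
import { decode, type JwtPayload } from "jsonwebtoken";
import type { Request, Response, NextFunction } from "express";

const audienceToStack: Record<string, string> = {
  "https://api.devo.example.com": "devo",
  "https://api.prod.example.com": "prod",
  "https://api.demo.example.com": "demo",
};

export function stackFromToken(req: Request, res: Response, next: NextFunction): void {
  const token = (req.headers.authorization ?? "").replace(/^Bearer\s+/i, "");
  const payload = decode(token) as JwtPayload | null;

  // aud may be a string or an array of strings; take the first known match.
  const audiences = ([] as string[]).concat(payload?.aud ?? []);
  const stack = audiences.map((a) => audienceToStack[a]).find(Boolean) ?? "prod";

  // Subsequent requests (and API routing) read the stack from this cookie.
  res.cookie("stackInfo", stack, { httpOnly: true, sameSite: "lax" });
  next();
}
```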
Cron management: declarative job orchestration
PLUG uses a JSON-based cron configuration system that supports both traditional crontab and systemd services. We leverage our EC2 instance to manage cron jobs efficiently within the platform.
JSON job definition schema
Jobs are defined declaratively:
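The exact schema isn't reproduced here, but a job entry could look roughly like the sketch below; the field names are assumptions based on the behaviour described next.

```ts
// Hypothetical shape of a declarative cron job entry; field names are assumptions.
interface CronJobDefinition {
  name: string;                 // unique job identifier
  schedule: string;             // standard cron expression
  command: string;              // what to execute
  type: "crontab" | "systemd";  // simple job vs. long-running service
  owner: string;                // team or engineer responsible
  enabled: boolean;
}

// Example entry as it might appear in a client-specific JSON file.
export const nightlyReport: CronJobDefinition = {
  name: "nightly-order-report",
  schedule: "0 2 * * *",
  command: "node dist/plugins/order-report/run.js",
  type: "crontab",
  owner: "logistics-automation",
  enabled: true,
};
```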
The cron_manager service:
- Parses JSON from multiple client-specific files
- Generates crontab entries for simple jobs
- Creates systemd service files for long-running jobs
- Manages service lifecycle (enable, start, reload)
- Writes a unified crontab
This enables version control for scheduled jobs and removes manual crontab editing.
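As a rough illustration of the crontab half of this flow (the systemd path is omitted), a generator could look like the sketch below; the job shape mirrors the one assumed above and the output format is illustrative.

```ts
// Hypothetical sketch of turning declarative job definitions into crontab lines.
type Job = {
  name: string;
  schedule: string;
  command: string;
  type: "crontab" | "systemd";
  owner: string;
  enabled: boolean;
};

export function toCrontabLines(jobs: Job[]): string {
  return jobs
    .filter((job) => job.enabled && job.type === "crontab")
    // One crontab line per simple job; long-running jobs go to systemd instead.
    .map((job) => `${job.schedule} ${job.command} # ${job.owner}/${job.name}`)
    .join("\n");
}

// The unified crontab would then be written out in one place, e.g.:
// fs.writeFileSync("/etc/cron.d/plug-jobs", toCrontabLines(allJobs) + "\n");
```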
Evolution from deterministic automation to intelligent agents
Traditional automation follows fixed rules. PLUG extends this with intelligent agents that use AI to understand context and adapt, placing intelligence within the platform.
Architectural foundation: Dedicated Agent Process
PLUG hosts agents on a dedicated process. This separation provides isolation and allows specialized optimization. Agents use the same shared infrastructure (sidecar services, authentication, logging) but run in their own execution context.
This isolates AI workloads from deterministic automation. Agents can have different resource profiles and failure boundaries while still sharing core platform services.
We are adding more agents to the platform; for now, we have two of them.
Insight Agent: automated diagnostics and root cause analysis
The Insight Agent analyzes build failures to produce explanations. It fetches logs (Jenkins, application logs), analyzes patterns, and generates concise summaries.
Architecturally, it processes multi-source data streams, includes token management and chunking for large inputs, routes notifications based on context, and runs asynchronously to avoid blocking requests.
For engineers, it reduces debugging and fix time.
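For the token-management piece, the idea is roughly the sketch below, which uses a simple character budget as a stand-in for real token counting; the limits and strategy are assumptions.

```ts
// Hypothetical sketch of chunking large build logs to fit an LLM context window.
// Uses a rough character budget as a stand-in for real token counting.
const MAX_CHARS_PER_CHUNK = 12_000;

export function chunkLogs(logText: string): string[] {
  const lines = logText.split("\n");
  const chunks: string[] = [];
  let current: string[] = [];
  let size = 0;

  for (const line of lines) {
    if (size + line.length > MAX_CHARS_PER_CHUNK && current.length > 0) {
      chunks.push(current.join("\n"));
      current = [];
      size = 0;
    }
    current.push(line);
    size += line.length + 1; // +1 for the newline
  }
  if (current.length > 0) chunks.push(current.join("\n"));
  return chunks;
}

// Each chunk would then be summarized, and the partial summaries combined
// into the final root-cause explanation.
```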
Smart Filter: natural language to structured queries
I built the first prototype of smart filtering on this platform.
The Smart Filter converts natural language into validated database queries. It translates user intent (e.g., “orders from yesterday over 5kg”) into structured filters that match database schemas.
Key elements: structured outputs (guaranteed schema compliance), few-shot learning from domain examples, temporal context injection, and domain-specific constraints and mappings. The agent also tracks token usage and cost for each query.
This enables product teams to offer natural-language search without training users on query syntax.
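A simplified sketch of the structured-output side, using the OpenAI SDK with a JSON-schema response format; the filter schema, model name, and prompts are assumptions, and the real agent layers few-shot examples, temporal context, and domain mappings on top.

```ts
// Hypothetical sketch of natural-language → structured filter via OpenAI structured outputs.
// The schema fields, model name, and prompts are assumptions, not the actual agent.
import OpenAI from "openai";

const client = new OpenAI();

const filterSchema = {
  type: "object",
  properties: {
    filters: {
      type: "array",
      items: {
        type: "object",
        properties: {
          field: { type: "string", enum: ["created_at", "weight_kg", "status"] },
          operator: { type: "string", enum: ["eq", "gt", "lt", "gte", "lte"] },
          value: { type: "string" },
        },
        required: ["field", "operator", "value"],
        additionalProperties: false,
      },
    },
  },
  required: ["filters"],
  additionalProperties: false,
};

export async function toFilter(query: string) {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "Convert the user's request into database filters." },
      { role: "user", content: query }, // e.g. "orders from yesterday over 5kg"
    ],
    response_format: {
      type: "json_schema",
      json_schema: { name: "order_filters", schema: filterSchema, strict: true },
    },
  });
  // The model output is constrained to the schema, so this parse is safe.
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```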
This feature has now been moved into our core product, where it handles a large number of natural-language queries.
Sidecar pattern: service abstraction layer
The Metaphor
A sidecar attaches to a motorcycle and provides extra functionality (storage, passenger seat) without modifying the motorcycle. In software, the sidecar pattern attaches a helper process to the main application to extend capabilities without changing the core logic.
The sidecar pattern separates:
- Business logic (main application) from
- Infrastructure concerns (logging, monitoring, security, service discovery)
The sidecar pattern abstracts shared services. Plugins call the sidecar over HTTP rather than managing credentials or dependencies directly.
Why Use Sidecars?
- Decoupling: Main app stays focused on business logic
- Reusability: Sidecar services can serve multiple applications
- Isolation: Sidecar failures don't crash the main application
- Specialization: Use the right technology for infrastructure tasks
- Independent scaling: Scale the sidecar independently
Let us understand this with an example: we have an email service that notifies engineers when an automation completes. (We have since extended this with a Slack service as well.)
The sidecar communicates via HTTP REST APIs. The main application makes HTTP requests to the sidecar, which handles infrastructure operations.
Request Flow:
- Plugin needs to send an email
- Plugin makes HTTP POST to https://SIDECAR_BASE_URL/send-email
- Sidecar processes the request (uses SMTP, credentials)
- Sidecar returns a success/failure response
- Plugin continues with business logic
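From the plugin's side, this is just an HTTP call. Here is a minimal sketch; the /send-email path comes from the flow above, while the request body fields are assumptions.

```ts
// Hypothetical sketch of a plugin asking the sidecar to send an email.
// SIDECAR_BASE_URL comes from the platform; the body fields are assumptions.
const SIDECAR_BASE_URL = process.env.SIDECAR_BASE_URL ?? "http://localhost:4003";

export async function notifyCompletion(to: string, automationName: string): Promise<void> {
  const res = await fetch(`${SIDECAR_BASE_URL}/send-email`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      to,
      subject: `Automation "${automationName}" completed`,
      body: "Your automation has finished running on PLUG.",
    }),
  });

  // The sidecar owns the SMTP credentials; the plugin only checks success/failure.
  if (!res.ok) {
    throw new Error(`Sidecar email request failed: ${res.status}`);
  }
}
```

Because the contract is plain HTTP, the same call works identically from a Python plugin.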
Key Characteristics
- Process Isolation: Sidecar runs as a separate process
- Network Communication: HTTP/HTTPS between processes
- Language Agnostic: Different languages can call the sidecar
- No Direct Coupling: Plugin doesn't import sidecar code
The sidecar pattern in PLUG provides:
- Separation: Business logic is separate from infrastructure
- Security: Centralized credential management
- Maintainability: Update services in one place
- Reusability: Shared services across all plugins
- Flexibility: Language-agnostic APIs
- Reliability: Isolated failures, better error handling
This pattern lets the engineer focus on business logic while the sidecar handles infrastructure concerns. The sidecar acts as a reusable service layer, avoiding duplication and reducing operational complexity.
For engineers: the sidecar is a microservice that provides infrastructure-as-a-service to all plugins. For product managers: it reduces development time, improves security, and centralizes maintenance, contributing directly to faster delivery and lower operational risk.
The sidecar pattern transforms infrastructure from a burden into a platform capability, enabling teams to build automations faster while maintaining security, reliability, and maintainability.
The Bottom Line
PLUG provides technical leverage by solving infrastructure problems once. It centralizes authentication, credentials, logging, scheduling, and deployment, allowing teams to focus on business logic.
The architecture delivers:
- Reduced time-to-automation-delivery: Template system and shared services
- Improved reliability: Isolated processes, health monitoring, error handling
- Better security: Centralized credential management, JWT validation, domain filtering
- Lower costs: Resource sharing, server-side execution, service pooling
- Enhanced observability: Structured logging, request tracing, error tracking
This structure means automations become reusable assets, not one-off scripts. The platform becomes more valuable as more automations are added.
For engineers and product managers, PLUG transforms automation management from a source of technical debt into a strategic advantage. It is an investment in infrastructure that compounds over time.