Containerizing ClearPath: How I Dockerized a Time-Blindness Task Planner for Production & GitOps


This morning, I tackled turning our college project, ClearPath Task Planner, into a containerized web app. It’s a full-stack app built to help users who struggle with time blindness stay on track. With a React frontend, Express backend, and MongoDB database, it was a solid app. I want potential employers to be able to see what’s being developed, but the project was built on an old Bitbucket pipeline and was mothballed after the semester. So I’ll be deploying a version to the k8s cluster.

Here’s what I did, and what I learned.


Architecture at a Glance

ClearPath is a three-tier web app. The frontend is React (TypeScript) served by nginx, the backend is a Node/Express API, and the database is MongoDB Atlas (free tier).

My goal was simple: wrap this up in containers, attempt best practices in security and performance, and lay the foundation for a Kubernetes deployment.


Backend Dockerfile: Security-First and Lean

I started with the backend. Here’s what the final Dockerfile includes:

  • Base: node:18-alpine (lightweight, secure)
  • Dependencies: Only production ones via npm ci --only=production
  • User security: Runs as a non-root user
  • Health check: Built-in /api/home?userId=health-check ping for Kubernetes

I added a custom health check that accepts both 200 and 400 responses (because the route expects a user ID). This gives Kubernetes the signal it needs without faking full login logic.
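Here’s a minimal sketch of that Dockerfile. The port, user name, and file layout are my assumptions for illustration, not the exact file from the repo:

```dockerfile
# Base: small Alpine image
FROM node:18-alpine
WORKDIR /app

# Production dependencies only
COPY package*.json ./
RUN npm ci --only=production

COPY . .

# Run as a non-root user (name is illustrative)
RUN addgroup -S clearpath && adduser -S clearpath -G clearpath
USER clearpath

EXPOSE 5000

# The route requires a userId, so treat 400 as "alive" alongside 200
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD node -e "require('http').get('http://localhost:5000/api/home?userId=health-check', r => process.exit([200,400].includes(r.statusCode) ? 0 : 1)).on('error', () => process.exit(1))"
```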


Frontend Dockerfile: Multi-Stage Perfection

The React frontend needed something tighter, so I used a multi-stage build:

  1. Build Stage: Runs npm ci && npm run build
  2. Production Stage: Uses nginx:alpine to serve assets
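
A sketch of that multi-stage Dockerfile, with the paths and the custom nginx.conf name assumed:

```dockerfile
# Stage 1: compile the React app
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve only the static build output with nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
```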

I included (see the config sketch after this list):

  • SPA routing via try_files
  • Static file caching (1 year!)
  • Security headers (CSP, XSS protection)
  • Gzip compression
  • Proxy to backend API
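
Roughly, the nginx config behind those bullets looks like this; the listen port, backend hostname, and exact CSP value are assumptions:

```nginx
server {
    listen 3000;

    root /usr/share/nginx/html;
    index index.html;

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/javascript application/json;

    # Security headers
    add_header Content-Security-Policy "default-src 'self'" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # SPA routing: fall back to index.html for client-side routes
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Long-lived caching for fingerprinted static assets
    location ~* \.(js|css|png|jpg|svg|woff2?)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    # Proxy API calls to the backend container
    location /api/ {
        proxy_pass http://backend:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```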

I also hit a snag with nginx’s default user/group setup. My fix: skip group creation and just create a non-root user that joins the existing nginx group. Works cleanly now.
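In Dockerfile terms, the workaround amounts to something like this (the user name is assumed):

```dockerfile
# nginx:alpine already ships an nginx group, so don't recreate it;
# just add a non-root user to it
RUN adduser -S clearpath-web -G nginx
USER clearpath-web
```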


Docker Compose

For local development, I tied it all together with a docker-compose.yml, sketched after this list. Some highlights:

  • Health checks with depends_on orchestration
  • Explicit ports (3000 for frontend, 5000 for backend)
  • External .env file for secrets (not baked into the image)
  • Restart policies, dependency graph, and logs—all streamlined
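
A trimmed-down sketch of that compose file; the service names, build contexts, and env file contents are assumptions:

```yaml
services:
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    env_file: .env          # Atlas URI and other secrets stay out of the image
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "node", "-e", "require('http').get('http://localhost:5000/api/home?userId=health-check', r => process.exit([200,400].includes(r.statusCode) ? 0 : 1)).on('error', () => process.exit(1))"]
      interval: 30s
      timeout: 5s
      retries: 3

  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      backend:
        condition: service_healthy
    restart: unless-stopped
```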

Debugging & Cleanup

Containerizing ClearPath came with its fair share of bumps. I discovered that the frontend’s package.json had backend-specific dependencies like Mongoose and MongoDB, which were unnecessary. Cleaning that up also meant refactoring imports in files like TaskContext.tsx and replacing backend-specific logic, such as mongoose.Types.ObjectId(), with frontend-safe alternatives like uuidv4().
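The swap looked roughly like this (the surrounding TaskContext.tsx code is assumed):

```typescript
import { v4 as uuidv4 } from "uuid";

// Before: pulled Mongoose into the frontend bundle just to mint an id
// const id = new mongoose.Types.ObjectId().toString();

// After: frontend-safe id generation, no database driver required
const id: string = uuidv4();
```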

Next, I ran into TypeScript issues with Axios: error responses were typed as unknown, leading to compilation errors. I resolved it with a safe fallback using (error.response.data as any)?.message to avoid type assertion pitfalls. I also had to modernize our Jest tests, as jest.useFakeTimers('modern') had been deprecated — a quick switch to jest.useFakeTimers() fixed that.
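Here’s the shape of that Axios fix; the endpoint and payload are hypothetical:

```typescript
import axios from "axios";

async function saveTask(task: { title: string }): Promise<void> {
  try {
    await axios.post("/api/tasks", task);
  } catch (error) {
    // Axios catch blocks are typed `unknown`; narrow before touching the response
    if (axios.isAxiosError(error)) {
      const message = (error.response?.data as any)?.message ?? "Request failed";
      console.error(message);
    } else {
      throw error;
    }
  }
}
```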

One subtle but frustrating issue came from nginx: the Alpine base image already defines an nginx group, so trying to add it manually caused conflicts. Instead, I opted to create a non-root user that inherited the existing group, which turned out to be a clean and secure workaround.

Lastly, our health check endpoint for the backend initially failed because it required a userId parameter. I updated the health check to call /api/home?userId=health-check and treated both 200 and 400 responses as valid, since the endpoint works as long as the app is responsive. Each of these fixes improved not just the containerization process, but the quality and reliability of the entire codebase.


Security, Performance, Observability

This wasn’t just about “make it run in Docker”. I hardened it.

Both the frontend and backend containers were configured to run as non-root users, minimizing the risk of privilege escalation if a container were ever compromised. Secrets such as database credentials and session tokens were kept out of the Docker images entirely by using environment variables loaded at runtime, ensuring sensitive data remains outside of version control and build artifacts.

On the frontend, I configured nginx to enforce security headers like Content-Security-Policy, X-Content-Type-Options, and X-XSS-Protection to defend against common web vulnerabilities. The nginx server also benefits from a hardened Alpine base image, which reduces the attack surface by minimizing unnecessary software.

By combining these practices with health checks and tight dependency controls, the app is now significantly more secure and aligned with container security best practices.


Ready for GitOps + Kubernetes

The containers are now ready to be pushed to a registry and deployed with GitOps (Flux). The workflow is standard (see the Flux sketch below):

  1. Build/tag images
  2. Push to registry
  3. GitOps tool detects image tag change
  4. Kubernetes manifests update
  5. Cluster deploys automatically
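
For step 3, Flux’s image automation controllers can watch the registry. A hypothetical sketch, with the registry path and resource names assumed:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: clearpath-backend
  namespace: flux-system
spec:
  image: registry.example.com/clearpath/backend   # assumed registry path
  interval: 5m0s
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: clearpath-backend
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: clearpath-backend
  policy:
    semver:
      range: ">=0.1.0"
```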

Final Thoughts

When I first started this process, it was mainly to showcase the project for potential employers. Once I got going and began running into issues, I decided to pivot: let’s containerize it properly and use it to practice:

  • Container security best practices
  • Observability readiness (healthchecks, logs)
  • Local dev + future-ready GitOps deployment
  • Real bug fixes and tech debt cleanup

It’s clean, secure, fast, and production-worthy. I’m proud of this little planner.

Next up… let’s deploy it to the cluster!