Hello everyone! I’m excited to share our journey of building EventFly, a comprehensive event management platform. We created this system as part of our Software Development Lab course in Level 4, Term 1 at Bangladesh University of Engineering and Technology (BUET). Our small team of just six members had only two months to complete this ambitious project. As the lead architect, I handled backend development, DevOps responsibilities, and the overall system architecture.
EventFly helps people create and manage events of all sizes. We designed it to be a complete solution that handles everything from event creation to participant engagement. An organizer can set up an event, sell tickets, send announcements, and collect feedback—all in one platform.
Our platform includes features that solve real problems for event organizers:
Our team of six had clear roles that aligned with our architecture:
This small team size influenced many of our architectural decisions, as we needed an approach that would maximize our productivity.
Choosing between a monolithic architecture and microservices was our first major decision. Let me explain why we went with microservices in simple terms.
A monolith is like a single large apartment where everything is connected. Microservices are more like a housing complex with separate units that communicate with each other.
The Monolith Option:
The Microservices Option:
We chose microservices for several practical reasons:
Team Autonomy: Even with only six members, we wanted developers to work on separate services without stepping on each other’s toes. This was crucial for our tight two-month timeline.
Technology Requirements: Our analytics features needed Python for machine learning capabilities, while our core services were built with Node.js. Microservices allowed this mixed technology approach.
Learning Opportunity: As a university project, we wanted to gain experience with modern architectural patterns used in industry.
Feature Independence: Some features like payments needed to be extremely reliable, while others like the newsfeed could tolerate occasional issues. Microservices let us apply different reliability standards to different components.
I remember explaining it to our professor: “With only two months and six people, we need an architecture that lets everyone work productively in parallel. Microservices will let us divide the work cleanly while learning valuable industry practices.”
We embraced Domain-Driven Design (DDD) principles to guide our microservices architecture. This approach helped us create services that aligned with business capabilities rather than technical concerns.
We began with event storming sessions where we mapped out the entire business domain on a whiteboard using colorful sticky notes:
This visual exercise helped us identify natural boundaries in our system. We could clearly see which operations and data belonged together, and which were separate concerns.
From our event storming sessions, we identified distinct bounded contexts—areas of the system with their own consistent terminology and rules. These became our microservices:
Each bounded context had its own ubiquitous language—a consistent set of terms used by both developers and business stakeholders. For example, in the Event Management context, we used terms like “organizer,” “venue,” and “schedule.” In the Payment context, we used “transaction,” “refund,” and “payment method.”
For more on Domain-Driven Design, I recommend the book “Domain-Driven Design: Tackling Complexity in the Heart of Software” by Eric Evans, which greatly influenced our approach.
Based on our DDD analysis, we divided EventFly into seven core services:
Each service had its own MongoDB database and communicated through a NATS Streaming server for asynchronous messaging. For synchronous communication, services exposed REST APIs.
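In practice, this meant each service read its own connection string from the environment and connected to its own database at startup. Here is a minimal sketch of that wiring; the module path, environment variable name, and port are illustrative, not necessarily what we used:

```javascript
// index.js of a typical service: connect to this service's own MongoDB
// before starting the HTTP server. Module path and env names are illustrative.
const mongoose = require('mongoose');
const app = require('./app'); // hypothetical Express app module

const start = async () => {
  // e.g. MONGO_URI=mongodb://auth-mongo:27017/auth for the Auth service
  if (!process.env.MONGO_URI) {
    throw new Error('MONGO_URI must be defined');
  }
  await mongoose.connect(process.env.MONGO_URI);

  app.listen(3000, () => console.log('Service listening on port 3000'));
};

start();
```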
Our frontend consisted of two Next.js applications:
Let me share some of the more interesting technical aspects of our implementation.
Our deployment architecture was designed for scalability and resilience, utilizing Kubernetes for orchestration:
We containerized each service using Docker. This gave us several benefits:
Our docker-compose.yaml file defined the entire development environment, making it easy for any team member to run the complete system locally:
```yaml
version: "3.9"
services:
  nats_server:
    image: nats-streaming
    ports:
      - "4222:4222"
  auth:
    build: ./auth
    ports:
      - "3000:3000"
    environment:
      - NATS_CLIENT_ID=auth
  org:
    build: ./org
    ports:
      - "3001:3000"
    environment:
      - NATS_CLIENT_ID=org
  # Other services followed the same pattern
```
For a deeper understanding of containerization in microservices, the article “Production-Ready Docker Packaging” was very helpful to us.
We used an event-driven architecture for communication between services. When something important happened in one service, it published an event to NATS Streaming. Other services subscribed to relevant events and updated their own data accordingly.
For example, when a new event was created:
```javascript
// In the Organization service
natsWrapper.client.publish('event:created', JSON.stringify({
  id: event.id,
  name: event.name,
  organizer: event.organizer,
  start_date: event.start_date,
  end_date: event.end_date,
  // other event properties
}));
```
```javascript
// In the Events service
class EventCreatedListener extends Listener {
  subject = 'event:created';
  queueGroupName = 'event-created-event';

  async onMessage(data, msg) {
    console.log('Event Created! Data: ', data);

    const event = Event.build({
      name: data.name,
      organizer: data.organizer,
      // other properties
    });
    await event.save();

    msg.ack();
  }
}
```
This pattern helped us maintain loose coupling between services while ensuring data consistency across the system. The article “Event-Driven Architecture” by Martin Fowler provided excellent guidance in this area.
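Both snippets above rely on a small natsWrapper module, essentially a singleton around the node-nats-streaming client so every file shares one connection. A simplified sketch of that kind of wrapper (not our exact production code):

```javascript
// nats-wrapper.js: a thin singleton so every module shares one NATS connection
const nats = require('node-nats-streaming');

class NatsWrapper {
  get client() {
    if (!this._client) {
      throw new Error('Cannot access NATS client before connecting');
    }
    return this._client;
  }

  connect(clusterId, clientId, url) {
    this._client = nats.connect(clusterId, clientId, { url });
    return new Promise((resolve, reject) => {
      this.client.on('connect', resolve);
      this.client.on('error', reject);
    });
  }
}

module.exports = { natsWrapper: new NatsWrapper() };
```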
One unique challenge was integrating Python-based machine learning algorithms with our Node.js services. We solved this by creating a hybrid service:
```javascript
// In our Analytics service (Node.js)
const { spawn } = require('child_process');

// Wrap the child process in a Promise so callers can simply await the parsed result.
const runPythonScript = (events, participantData) =>
  new Promise((resolve, reject) => {
    const python = spawn('python3', [
      'recommender.py',
      JSON.stringify(events),
      JSON.stringify(participantData),
    ]);

    let output = '';
    python.stdout.on('data', (chunk) => { output += chunk.toString(); });
    python.on('error', reject);
    python.on('close', () => resolve(JSON.parse(output)));
  });
```
This approach allowed us to use Python’s rich data science libraries while maintaining consistency with our overall architecture.
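Because the wrapper above returns a Promise, calling it from a route handler stays straightforward. Here is an illustrative Express route in the Analytics service; the route path, model modules, and file layout are assumptions for the example, not our exact code:

```javascript
// Illustrative Express route in the Analytics service
const express = require('express');
const { Event } = require('../models/event');             // hypothetical model module
const { Participant } = require('../models/participant');  // hypothetical model module
const { runPythonScript } = require('../recommender');     // the wrapper shown above

const router = express.Router();

router.get('/api/analytics/recommendations/:participantId', async (req, res) => {
  const events = await Event.find({});
  const participant = await Participant.findById(req.params.participantId);

  // Hand the data to the Python recommender and return its JSON output as-is.
  const recommendations = await runPythonScript(events, [participant]);
  res.send(recommendations);
});

module.exports = router;
```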
With a microservices architecture, having a solid CI/CD pipeline was essential. We set up GitHub Actions to automate our development workflow:
Here’s a simplified example of our GitHub Actions workflow:
```yaml
name: Deploy to Dev Environment

on:
  push:
    branches: [ feature/* ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: cd auth && npm install && npm test
      # Similar steps for other changed services

  build-and-deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build Docker images
        run: docker-compose build
      - name: Push to registry
        run: docker-compose push
      - name: Deploy to Dev
        run: kubectl apply -f infra/k8s/dev
```
This automation was crucial for our small team, as it freed us from manual deployment tasks and provided consistent quality checks. The article “CI/CD for Microservices on Kubernetes” helped us design this workflow.
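For completeness, here is a hedged sketch of the kind of manifest `kubectl apply -f infra/k8s/dev` would pick up for the auth service, a standard Deployment plus a ClusterIP Service. The image name and environment values are placeholders:

```yaml
# infra/k8s/dev/auth-depl.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: eventfly/auth:latest # placeholder image name
          env:
            - name: NATS_CLIENT_ID
              value: auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  type: ClusterIP
  selector:
    app: auth
  ports:
    - port: 3000
      targetPort: 3000
```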
Building a microservices system with just six people in two months presented unique challenges:
With seven services and only six team members, we had to be strategic. We prioritized:
The lesson: Even with microservices, you need to balance service count with team capacity. We could have combined some services, but the clear boundaries were worth the extra effort.
With separate databases for each service, keeping data consistent was challenging. We couldn’t implement complex distributed transaction systems in our timeframe.
Our solution was a pragmatic approach to eventual consistency:
I explained it to our team using a simple analogy: “It’s like how universities handle student information. The registrar, accounting, and library departments all have your data. When you change your address, it might take time for all departments to update their records.”
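In code, eventual consistency mostly means listeners have to tolerate duplicate and out-of-order messages. One common way to handle that, shown here purely as an illustration rather than as our exact implementation, is to version records and only apply the next expected update (Listener and Event stand for the same base class and model as in the earlier snippet):

```javascript
// Illustrative only: a version-checked listener that applies an update only when
// it is the next one in sequence; otherwise it skips the ack so NATS redelivers.
class EventUpdatedListener extends Listener {
  subject = 'event:updated';
  queueGroupName = 'events-service';

  async onMessage(data, msg) {
    // Look up the local copy at the previous version.
    const event = await Event.findOne({ _id: data.id, version: data.version - 1 });
    if (!event) {
      return; // duplicate or out-of-order message: wait for redelivery
    }

    event.set({ name: data.name, version: data.version });
    await event.save();
    msg.ack();
  }
}
```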
For more insights on managing data in distributed systems, “Data Consistency in Microservices Architecture” was invaluable.
With only one person (me) handling DevOps, we needed to keep operations simple but effective.
We implemented:
The key was automation. Everything from testing to deployment was automated, allowing our small team to focus on development rather than operations.
Despite the challenges, microservices significantly improved our team’s productivity:
With our team split between frontend and backend, microservices allowed everyone to work productively:
This independence was crucial for meeting our two-month deadline. We didn’t need to coordinate deployments or worry about breaking each other’s code.
Microservices allowed us to use different technologies where appropriate:
This flexibility let us choose the right tool for each job rather than forcing a one-size-fits-all approach.
Each service had a primary owner who was responsible for its design and implementation. This created:
For example, our analytics specialist became an expert in recommendation algorithms, while our payment specialist mastered secure transaction processing.
As the sole architect in our six-person team, I had to balance hands-on development with architectural guidance:
I created:
These standards gave the team a consistent starting point while allowing flexibility for specific service needs.
Unlike in larger teams where an architect might focus exclusively on design, I was both architect and developer:
This hands-on approach let me identify and address cross-cutting concerns early. When I noticed teammates implementing similar logging patterns differently, I created a shared logging library that standardized the approach across services.
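To give a feel for it, here is a simplified sketch of that kind of shared logger; the field names and module path are illustrative:

```javascript
// common/logger.js: one log format for every service, so output from different
// services can be filtered and correlated the same way. Field names are illustrative.
const log = (level, service, message, meta = {}) => {
  console.log(JSON.stringify({
    level,
    service,
    message,
    timestamp: new Date().toISOString(),
    ...meta,
  }));
};

module.exports = {
  info: (service, message, meta) => log('info', service, message, meta),
  error: (service, message, meta) => log('error', service, message, meta),
};
```

Services then imported this module instead of calling console.log directly, so every log line carried the same fields.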
In our small team, knowledge sharing was vital. I established:
These practices ensured that although each person had primary ownership of specific services, everyone understood the overall system.
Building EventFly as part of our university course provided unique insights:
Right-sized services: We initially planned nine services but realized that was too many for our team size. Combining some related functionality into seven services was more manageable.
Documentation is essential: With different people owning different services, good documentation became our shared language. Services with thorough documentation were integrated more easily.
Start with infrastructure: Setting up a solid development environment and CI/CD pipeline early saved countless hours later. Automation is worth the initial investment.
Domain-driven design works: Taking time to understand the business domain before writing code led to more intuitive service boundaries and better system design.
Balance theory and pragmatism: While we followed microservices best practices where possible, we also made pragmatic compromises when needed for our timeline.
Completing EventFly as part of our Software Development Lab at BUET provided both advantages and constraints:
Advantages:
Constraints:
Despite these constraints, we built a working system that demonstrated both technical excellence and practical usability. Our professors were particularly impressed with how we applied microservices concepts in a real-world project with such a small team.
Building EventFly using microservices was the right choice for our specific context. The architecture allowed our small team to:
For small teams considering microservices, I recommend:
Our experience shows that microservices aren’t just for large teams. With careful planning and the right tools, even a small team of six university students can successfully implement a microservices architecture in a tight timeframe.
If you’re interested in exploring these concepts more deeply, here are some resources that helped us:
“Building Microservices” by Sam Newman - The definitive guide that shaped our overall approach
“Domain-Driven Design Distilled” by Vaughn Vernon - Helped us apply DDD principles to our microservices architecture
“Designing Data-Intensive Applications” by Martin Kleppmann - Invaluable for understanding data consistency challenges
“The DevOps Handbook” by Gene Kim et al. - Guided our CI/CD implementation
“Monolith to Microservices” by Sam Newman - Provided patterns for establishing service boundaries
These resources provided the theoretical foundation that we translated into practical implementation in our EventFly project.