Medlife is an Indian ecommerce company in the healthcare space.
Currently, we are the #1 player in the ePharma space.
We have four verticals:
1. Medlife.com, which deals with pharma products,
2. Medlife Labs, an aggregator for preventive & pathological tests,
3. Doctor e-consultation, where customers can book an appointment and consult doctors,
4. PinHealth.com, which deals with non-pharma and some OTC products; PinHealth also sells supplements under private branding.
Entry Point:
App by the name “Medlife”, available on Google Play & the iOS App Store.
www.medlife.com & www.pinhealth.com
In-bound call center
Just like any other ecommerce business, we operate 24x7, 365 days a year. Uptime and performance are very important.
Medlife, as a company, was founded around Nov 2014.
With plans to scale fast, we didn’t spend time setting up on-premises infrastructure.
We went straight to the AWS cloud; in short, we were born in AWS.
We had our first ever production deployment done on AWS during May 2015
We delivered our first order to a customer around the same time frame.
Like many Indian startups around that time, AWS Singapore was the only choice.
1st stage is about the very early days of Medlife.
2nd stage is about the improvements and automation we brought in to manage things better, with less chaos.
3rd stage is about the migration activity that we did from Singapore to Mumbai.
4th stage is about how we had to optimise and align ourselves to manage the growth and the scale.
5th stage is about how we would like to take Medlife to the next level.
This stage is about the roots of Medlife. Seeds being sown at this stage.
2-tier monolith application deployed on Apache Tomcat, reverse-proxied by nginx, with MongoDB as the database.
Everything on the t2.medium instance type. Even MongoDB was installed on a t2.medium instance.
All instances in the default VPC that AWS provides. Not many best practices in place.
Not much optimization: instances one size larger than required, and payloads heavier than necessary.
Deployments done using shell scripts during off-peak hours. A lot of room for manual errors.
1. Any issue in the backend would bring the FE down.
2. The t2.medium instances’ CPU credits would get exhausted.
3. We had a lot of traffic from locations where we don’t even operate.
4. High data transfer costs due to unoptimized code.
5. Handling configuration changes was becoming way too tedious: any change in configuration meant bouncing the servers, which was far from ideal.
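One way out of the “bounce the servers on every config change” problem is hot-reloading configuration from disk. A minimal stdlib sketch, with a hypothetical feature-flag config (not our actual mechanism):

```python
import json
import os
import tempfile

class HotConfig:
    """Reload a JSON config file whenever its mtime changes, so config
    updates take effect without bouncing the server process."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._data = {}
        self.reload_if_changed()

    def reload_if_changed(self):
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:              # file was rewritten
            with open(self.path) as f:
                self._data = json.load(f)
            self._mtime = mtime

    def get(self, key, default=None):
        self.reload_if_changed()              # cheap stat() on every read
        return self._data.get(key, default)

# Demo: flip a hypothetical feature flag on disk; no restart needed.
path = os.path.join(tempfile.mkdtemp(), "app.json")
with open(path, "w") as f:
    json.dump({"feature_flag": False}, f)
cfg = HotConfig(path)
first = cfg.get("feature_flag")               # read before the change

with open(path, "w") as f:
    json.dump({"feature_flag": True}, f)
os.utime(path, (0, 0))                        # bump mtime explicitly, in case
                                              # filesystem timestamps are coarse
second = cfg.get("feature_flag")              # read after the change
print(first, second)
```

In production the same idea is usually delegated to a configuration-management or parameter-store service rather than a local file.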
The need of the hour was to see how we could improve and bring in more automation to manage things better.
Enter Stage 2, where we talk about the optimization and automation.
Moved all our EC2 instances into private and public subnets.
Chef for configuration management and deployments, and Jenkins for one-click deployment.
AMIs and Auto Scaling (driven via the AWS SDK) become very important when the business and user base are growing.
To handle the single points of failure, we had to decouple the FE and BE.
Even a single ask of reducing the query payload improved the throughput of the application.
Every new feature goes into microservices.
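Decoupling the FE from the BE means a backend failure should degrade the frontend, not kill it. A common pattern for that is a circuit breaker; this is a minimal sketch of the idea, not our actual implementation, and `flaky_backend` is a made-up stand-in:

```python
import time

class CircuitBreaker:
    """After `max_failures` consecutive errors, stop calling the backend
    for `reset_after` seconds and serve a fallback instead, so a backend
    outage cannot take the frontend down with it."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, backend, fallback):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                return fallback()             # circuit open: skip the backend
            self.opened_at = None             # half-open: allow one retry
            self.failures = 0
        try:
            result = backend()
            self.failures = 0                 # success resets the counter
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock() # trip the circuit
            return fallback()

def flaky_backend():
    raise ConnectionError("backend is down")

breaker = CircuitBreaker(max_failures=2, reset_after=60.0)
responses = [breaker.call(flaky_backend, lambda: "cached page")
             for _ in range(5)]
print(responses)  # the frontend keeps serving the fallback throughout
```

After the second failure the breaker trips, so the remaining calls never even touch the broken backend.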
AWS Singapore till Dec 2017,
Migrated to AWS Mumbai around Dec 2017
After our migration, we were all set for our next big leap.
The most exciting part of our journey so far, as we encountered problems that are good to have.
Who wouldn’t want to have problems due to growth, scale and when pushed to the wall to control your costs?
Tools connecting to MongoDB. Need for OLAP. Need for ETL pipelines that can be run on EMR clusters and push results to OLAP databases.
ETL pipeline scheduling using Airflow. Managing cron jobs with Airflow.
Extremely fast data-retrieval use cases served by Redis.
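What Airflow buys you over plain cron is dependency ordering: each pipeline is a DAG of tasks, and a task runs only after everything it depends on has finished. This is not the Airflow API, just a toy stdlib sketch of that idea with hypothetical task names:

```python
from graphlib import TopologicalSorter

# Hypothetical ETL pipeline: extract from MongoDB, transform on EMR,
# load into the OLAP store. Each key depends on the tasks in its value set.
dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_on_emr": {"extract_orders", "extract_customers"},
    "load_into_olap": {"transform_on_emr"},
}

def run(task):
    print(f"running {task}")
    return task

# Airflow's scheduler does a far more robust version of this:
# execute tasks only once all of their upstream dependencies succeed.
order = list(TopologicalSorter(dag).static_order())
executed = [run(t) for t in order]
```

With cron you would have to guess at start times and hope the extract finishes before the transform fires; a DAG scheduler makes that ordering explicit, and Airflow adds retries, backfills, and monitoring on top.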
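The usual way Redis speeds up reads is the cache-aside pattern: check the cache first, fall back to the database on a miss, and populate the cache for next time. A minimal sketch; a plain dict stands in for Redis here, and the catalog lookup is a made-up example (a real deployment would use a Redis client and set a TTL on each key):

```python
class CacheAside:
    """Cache-aside: serve reads from the cache when possible and only
    touch the database on a miss."""

    def __init__(self, db_fetch):
        self.cache = {}              # stand-in for Redis
        self.db_fetch = db_fetch
        self.db_hits = 0             # count how often we hit the database

    def get(self, key):
        if key in self.cache:
            return self.cache[key]   # fast path: served from cache
        self.db_hits += 1
        value = self.db_fetch(key)   # slow path: go to the database
        self.cache[key] = value      # populate for the next reader
        return value

# Hypothetical product-catalog lookup backed by a slow database call.
catalog = CacheAside(db_fetch=lambda sku: f"product-{sku}")
first = catalog.get("B12")   # miss: goes to the database
second = catalog.get("B12")  # hit: served from the cache
print(first, second, catalog.db_hits)
```

The trade-off is staleness: cached values can lag the database, which is why production keys carry a TTL or are invalidated on write.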
AWS WAF implementation.
Need for multiple databases. One size doesn’t fit all. AWS DynamoDB, Cassandra, Redshift, MySQL, Postgres, Couchbase.
Icinga for infrastructure monitoring. Up to 20-30 instances is manageable manually; beyond that, you need a tool for automated alerting.
Need for better understanding the data flow within our systems.
We built our own ELK cluster for log analysis. CloudWatch gotchas.
Carve the existing service into microservices. Examples: order microservice, procurement, warehousing.
Forecast your RIs. Go for the right kind: 1-year or 3-year term, standard or convertible, and the right payment option. Don’t act in haste and repent at leisure.
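Before committing to an RI it is worth doing the break-even arithmetic. A back-of-the-envelope sketch; the hourly rates below are illustrative assumptions, not real AWS prices:

```python
# Hypothetical rates: the numbers are made up, the break-even logic
# is what matters.
HOURS_PER_YEAR = 8760

on_demand_rate = 0.10        # $/hour, pay as you go
ri_effective_rate = 0.065    # $/hour, e.g. a 1-year standard RI

on_demand_yearly = on_demand_rate * HOURS_PER_YEAR
ri_yearly = ri_effective_rate * HOURS_PER_YEAR

# An RI only pays off if the instance actually runs enough hours:
# below this utilization, on-demand would have been cheaper.
break_even_hours = ri_yearly / on_demand_rate
utilization_needed = break_even_hours / HOURS_PER_YEAR

print(f"on-demand: ${on_demand_yearly:.0f}/yr, RI: ${ri_yearly:.0f}/yr")
print(f"RI breaks even at {utilization_needed:.0%} utilization")
```

If your forecast says the workload may shrink or change instance family, that is the argument for convertible RIs over standard ones, even at a smaller discount.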
Very important to optimize your systems.
Create billing alarms and keep checking Cost Explorer. The SMSBomber example.
By default, files pushed to S3 travel over the public internet; an S3 VPC gateway endpoint keeps that traffic on the AWS network.
A great journey so far. Looking forward to more challenges.
Service 1 is bombarding Service 2, a downstream service, with too many requests. What kind of system will help you absorb all those requests?
Which service or page on the AWS Console will give you good visibility about your AWS spends?
What’s the example that we used during our talk to emphasize on the need for invoice anomalies?