The Architecture behind highstreet.ly

I recently read a blog post (https://anthonynsimon.com/blog/one-man-saas-architecture/) by Anthony Simon, discussing the architecture behind his one-man startup. I found it encouraging to see someone else's setup and how it's been architected.

I've worked for a lot of larger (and therefore bigger-budget) clients who use cloud, on-prem or hybrid approaches. Some of these clients spend £10,000 a week JUST on logging infrastructure! Clearly that kind of budget is way out of reach for a startup, so when you're self-funded you need to run things the best way you know how - on a budget. That's why Anthony's post struck a chord with me. We all know how to build highly scalable systems when there's a budget behind us, but when it's just you, trying to figure it all out without the huge infrastructure, budget, teams etc - it's a little isolating!

And so, because I found Anthony's post so useful, I thought I should share what we have at highstreet.ly.

First up a little background. 

Our small team is made up of four:

  • Myself - I do all of the infrastructure, hosting, backend coding and 50% of the front end
  • Chris - front end
  • Michael - marketing and business
  • Jon - design

We currently run it from our homes - 2 of us in the south of England, 1 in the Midlands and 1 in Mexico!

We started off as a ticketing platform, and before that a native app for festivals. But we never really got very far with that, and things changed last year...

Highstreet.ly was a pandemic baby. We were aware of the struggles lots of high-street shops were facing pre-COVID - and during COVID things clearly got a lot worse. We knew that a lot of bricks-and-mortar businesses were struggling to compete with services offering either similar products with fast delivery and competitive pricing, or wrapping high-street businesses in a last-mile delivery solution and charging them HUGE fees for the pleasure. We wanted to give these businesses an option to compete where they aren't being cheated - we wanted to help empower the high street.



And so highstreet.ly was born. We are a subscription-based SaaS offering, providing tools that we hope will allow high-street businesses to offer services they maybe wouldn't otherwise be able to, at a cost which is far, far less than the other offerings. Our service has three main components:

  • Shop / widget (for self hosting)
  • Dashboard 
  • Operator app


In order to keep costs down but still pack a punch, we host our own servers in a datacenter in Brighton (FastNet). This cuts our hosting fees to a fraction of what they would be in the cloud.

I'm going to echo what Anthony said in his post: our architecture is the way it is because it's what suits us. It was developed through trial and error and reflects my own personal experience and expertise - your solution may (and probably will) be vastly different. There is no right or wrong way to architect your system - I'm certainly not able to advise on that. Ours is architected the way it is so that we can streamline our infrastructure, keep our costs down and focus on building the software that makes our product.

I also want to repeat what Anthony said in his post: you can tell from the image below that we use a huge amount of amazing open source software - we really do stand on the shoulders of giants. Making a SaaS product is truly a challenge, but it would be pretty much impossible without the amazing work put in by the people behind these projects.

Here's a bird's-eye view of our architecture:





What we got:
  • We have 3 x DL380 servers hosted in the datacenter
  • We have 1 x SonicWall firewall 
  • The 3 servers each run ESXi (the free edition) to host the nodes that make up our hosting
  • 1 of the VMs runs Postgres
  • 1 of the VMs runs RabbitMQ
  • We then have 12 VMs which make up our Kubernetes cluster
On all of this we run 3 environments (LIVE, TEST, DEV) - each in its own namespace, with its own database and its own vhost in RMQ.
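
For the sake of illustration, that separation could be expressed as something like the sketch below - the namespace, database and vhost names are hypothetical, not our real ones:

```typescript
// Hypothetical sketch of how each environment gets its own Kubernetes
// namespace, Postgres database and RabbitMQ vhost. Names are illustrative.
type EnvironmentName = "live" | "test" | "dev";

interface EnvironmentLayout {
  namespace: string;    // Kubernetes namespace the services are deployed into
  database: string;     // dedicated Postgres database
  rabbitVhost: string;  // dedicated RabbitMQ vhost
}

const environments: Record<EnvironmentName, EnvironmentLayout> = {
  live: { namespace: "live", database: "highstreetly_live", rabbitVhost: "/live" },
  test: { namespace: "test", database: "highstreetly_test", rabbitVhost: "/test" },
  dev:  { namespace: "dev",  database: "highstreetly_dev",  rabbitVhost: "/dev" },
};
```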

To provision the cluster I use the amazing Kubespray (https://github.com/kubernetes-sigs/kubespray).

On top of that I've added our own Ansible project to install things after the cluster is initialised - stuff like cert-manager etc. We used to host Postgres and RMQ inside the cluster using Rook/Ceph, but I found the performance lacking and recovering from a failure was very involved.

Once the cluster is up and running we use Pulumi (https://www.pulumi.com/) to deploy our services. However, this is done via a GitHub Action, with a remote runner which lives in our cluster. So pushing our Pulumi code with a git tag will trigger a build of our IaC project, which sends the build context to the runner living inside our cluster, which then installs all of the services for the namespace being targeted. It sounds complicated but it's very, very easy once it's set up, and it means I never have to SSH onto any servers to deploy anything.
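
To give a flavour of what the Pulumi side looks like, here's a minimal sketch of how one API service could be described - the service name, image and config key are illustrative, not our real code:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical sketch of one API service as Pulumi sees it.
const config = new pulumi.Config();
const namespace = config.require("namespace"); // "live", "test" or "dev"

const labels = { app: "orders-api" };

const deployment = new k8s.apps.v1.Deployment("orders-api", {
  metadata: { namespace },
  spec: {
    replicas: 2,
    selector: { matchLabels: labels },
    template: {
      metadata: { labels },
      spec: {
        containers: [{
          name: "orders-api",
          // the tag comes from the git tag that triggered the build
          image: "highstreetly/orders-api:1.2.3",
          ports: [{ containerPort: 80 }],
        }],
      },
    },
  },
});

const service = new k8s.core.v1.Service("orders-api", {
  metadata: { namespace },
  spec: { selector: labels, ports: [{ port: 80 }] },
});
```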

We have 3 types of containers we need to build to make up our system:
  • .NET Core API services
  • Ember / Svelte Nginx containers
  • Migration jobs
The migration jobs use Flyway (https://flywaydb.org/) and work by convention - any new script in the relevant folder is picked up when the job runs. The .NET Core images are built using GitHub Actions and stored in Docker Hub, and the same goes for the Ember / Svelte images - except these are compiled in the action and the /release folder is copied into the Nginx image.
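
As a rough sketch, a migration job ends up looking something like the Pulumi definition below - the image name, database details and secret names are placeholders, while the FLYWAY_* environment variables are standard Flyway configuration:

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical sketch of a migration Job: the image bundles Flyway plus the
// SQL scripts, and Flyway picks up any new scripts by convention.
const migration = new k8s.batch.v1.Job("orders-db-migrate", {
  metadata: { namespace: "dev" },
  spec: {
    backoffLimit: 2,
    template: {
      spec: {
        restartPolicy: "Never",
        containers: [{
          name: "flyway",
          image: "highstreetly/orders-migrations:1.2.3", // placeholder image
          args: ["migrate"],
          env: [
            { name: "FLYWAY_URL", value: "jdbc:postgresql://postgres:5432/orders" },
            { name: "FLYWAY_USER", valueFrom: { secretKeyRef: { name: "orders-db", key: "username" } } },
            { name: "FLYWAY_PASSWORD", valueFrom: { secretKeyRef: { name: "orders-db", key: "password" } } },
          ],
        }],
      },
    },
  },
});
```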

We have 5 bounded contexts, and 4 of them have a message receiver process, so we have 9 services that run as microservices inside each of our cluster namespaces. We use the CQRS/ES pattern where applicable.
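
For anyone unfamiliar with the pattern, here's a very small sketch of the command/event split - the types and handlers are illustrative only, not our actual contracts (which are .NET, not TypeScript):

```typescript
// Minimal CQRS/ES shape: the write side handles a command and publishes an
// event; the receiver process projects that event into a read model.
interface PlaceOrder {
  orderId: string;
  productId: string;
  quantity: number;
}

interface OrderPlaced {
  orderId: string;
  placedAt: string;
}

// Write side: validate the command, then publish the resulting event
// (in our case this goes over RabbitMQ).
async function handlePlaceOrder(
  cmd: PlaceOrder,
  publish: (e: OrderPlaced) => Promise<void>
): Promise<void> {
  if (cmd.quantity < 1) throw new Error("quantity must be positive");
  await publish({ orderId: cmd.orderId, placedAt: new Date().toISOString() });
}

// Read side (the message receiver process): update a query-friendly projection.
async function onOrderPlaced(
  event: OrderPlaced,
  db: { upsertOrderSummary: (id: string) => Promise<void> }
): Promise<void> {
  await db.upsertOrderSummary(event.orderId);
}
```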

For authentication we use IdentityServer and our microservices run in a zero trust architecture (https://www.paloaltonetworks.com/cyberpedia/what-is-a-zero-trust-architecture). Our UIs use OAuth and JWT tokens to authenticate against our servers.
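
Conceptually, every service validates the incoming bearer token against IdentityServer's signing keys before doing any work. Our implementation is the standard .NET JWT middleware, but a rough TypeScript equivalent (using the jose library, with hypothetical URLs and audience) looks like this:

```typescript
import { createRemoteJWKSet, jwtVerify } from "jose";

// Illustrative sketch only: validate the token's signature, issuer and
// audience against the identity provider's published signing keys.
const jwks = createRemoteJWKSet(
  new URL("https://identity.example.com/.well-known/openid-configuration/jwks")
);

export async function requireUser(authorizationHeader: string) {
  const token = authorizationHeader.replace(/^Bearer\s+/i, "");
  const { payload } = await jwtVerify(token, jwks, {
    issuer: "https://identity.example.com",
    audience: "orders-api", // each service only accepts tokens scoped to it
  });
  return payload; // claims: sub, scope, etc.
}
```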

Because we are using a microservice architecture we also use a BFF setup (https://tsh.io/blog/design-patterns-in-microservices-api-gateway-bff-and-more/) - this allows us to present a specialised API for each of our clients which exposes only what is necessary. It also allows us to do things like caching and rate limiting. For the BFF we use Ocelot: https://github.com/ThreeMammals/Ocelot
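
For illustration, a BFF route in Ocelot's configuration looks roughly like the literal below - the paths, downstream host, cache TTL and rate limits are hypothetical, and the exact schema depends on the Ocelot version in use:

```typescript
// Roughly what a route in an ocelot.json might look like, expressed as a
// TypeScript literal for illustration.
const ocelotConfig = {
  Routes: [
    {
      UpstreamPathTemplate: "/shop/products/{id}",
      UpstreamHttpMethod: ["GET"],
      DownstreamScheme: "http",
      DownstreamPathTemplate: "/api/products/{id}",
      DownstreamHostAndPorts: [{ Host: "catalogue-api", Port: 80 }],
      // caching and rate limiting are configured per route
      FileCacheOptions: { TtlSeconds: 30 },
      RateLimitOptions: { EnableRateLimiting: true, Period: "1s", Limit: 10 },
    },
  ],
  GlobalConfiguration: { BaseUrl: "https://api.example.com" },
};
```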

When we specify a service with an ingress (APIs, UIs, etc.), cert-manager automatically provisions the TLS certificate for it. This hangs off the Pulumi infrastructure, whereby Pulumi will create the DNS records at DigitalOcean automatically for us based on the configuration of the environment being deployed. We use the cert-manager DNS01 challenge with DO to make this happen: https://cert-manager.io/docs/configuration/acme/dns01/digitalocean/
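
A simplified sketch of that wiring, with hypothetical host names, issuer name and IP address, might look like this in Pulumi:

```typescript
import * as digitalocean from "@pulumi/digitalocean";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical sketch: Pulumi creates the DNS record at DigitalOcean and an
// Ingress annotated for cert-manager, which then solves the DNS01 challenge.
const record = new digitalocean.DnsRecord("orders-api", {
  domain: "example.com",
  type: "A",
  name: "api.dev",        // => api.dev.example.com
  value: "203.0.113.10",  // the cluster's ingress IP
});

const ingress = new k8s.networking.v1.Ingress("orders-api", {
  metadata: {
    namespace: "dev",
    annotations: { "cert-manager.io/cluster-issuer": "letsencrypt-dns01" },
  },
  spec: {
    tls: [{ hosts: ["api.dev.example.com"], secretName: "orders-api-tls" }],
    rules: [{
      host: "api.dev.example.com",
      http: {
        paths: [{
          path: "/",
          pathType: "Prefix",
          backend: { service: { name: "orders-api", port: { number: 80 } } },
        }],
      },
    }],
  },
});
```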

Putting all of this together, we have a fully automated CI/CD pipeline with versioning at every level - even the infrastructure can be rolled back with a single command. This alone is something I see "big budget" projects fail at all of the time. The code and wiring to achieve this is simple and uses standard, open source tooling - and that matters, because as we grow, new developers joining the team will be able to "get" all of this really quickly.




For local development we use kind (https://kind.sigs.k8s.io/docs/user/quick-start/) - the good thing about this is that spinning up a cluster locally just requires a new Pulumi stack definition. My local cluster uses exactly the same code to initialise as the deployed, publicly visible environments.
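
In practice that just means the per-stack configuration points at a different cluster and domain - something along these lines, with hypothetical config keys and values:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical sketch: the program is identical for every environment, and
// the stack config decides which cluster it targets.
const config = new pulumi.Config();

// e.g. `pulumi config set kubeContext kind-local` on the local stack,
// and the datacenter context on the live/test/dev stacks
const provider = new k8s.Provider("cluster", {
  context: config.require("kubeContext"),
});

// every resource then takes { provider }, so `pulumi stack select local`
// followed by `pulumi up` deploys the same services into kind
const ns = new k8s.core.v1.Namespace("env", {
  metadata: { name: config.require("namespace") },
}, { provider });
```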

For logging we use New Relic - a log scraper installed inside our cluster automatically ships our logs and telemetry to New Relic for analysis and alerting.

Since we are a subscription-based service, we currently use ChargeBee (https://www.chargebee.com/) to host our plans and add-ons. We have a webhook set up on our end which receives events from ChargeBee, allowing us to mirror the configuration and react to changes in user subscriptions.

The good thing about ChargeBee is that it enables us to set up the subscriptions infrastructure without needing to implement 100% of it - we can settle that tech debt when we need to. It also integrates with Stripe. 
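
As a rough illustration of the webhook idea (our real endpoint is a .NET Core service, and the event types shown are just a subset):

```typescript
import express from "express";

// Illustrative sketch: ChargeBee posts an event whose event_type tells us
// what changed; we mirror that into our own data so the platform can react.
const app = express();
app.use(express.json());

app.post("/webhooks/chargebee", async (req, res) => {
  const event = req.body;

  switch (event.event_type) {
    case "subscription_created":
    case "subscription_changed":
      // update the operator's plan and enabled features on our side
      break;
    case "subscription_cancelled":
      // downgrade / disable the relevant shop
      break;
    default:
      // ignore events we don't care about
      break;
  }

  res.sendStatus(200); // acknowledge so ChargeBee doesn't retry
});

app.listen(3000);
```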

For our shop widget we use Stripe to handle card payments. The good thing about Stripe - for us - is that we can route payments for products purchased to our B2C customers' Stripe accounts, and our platform fee to our own Stripe account. We also plan to allow our B2C customers' customers (!) to subscribe to their products - and Stripe allows this in a very easy and fluid manner.
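
That split maps onto Stripe Connect's destination charges - a rough sketch (amounts, fee and the connected account ID are placeholders) might look like this:

```typescript
import Stripe from "stripe";

// Illustrative sketch of a destination charge: the payment settles in the
// shop's connected account and our platform fee is split off automatically.
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!, {
  apiVersion: "2023-10-16", // pin whatever API version you build against
});

export async function takePayment(connectedAccountId: string) {
  return stripe.paymentIntents.create({
    amount: 2500,                // £25.00, in pence
    currency: "gbp",
    application_fee_amount: 125, // our platform fee, in pence
    transfer_data: { destination: connectedAccountId },
  });
}
```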

That's a kind of whirlwind tour of the systems behind highstreet.ly! Techies with a keen eye will see there are one or two elements missing from the picture above (CDN etc.) - we're working towards these things and we will get there. It's all part of the challenge! I hope it helps reassure someone else out there in the same way reading Anthony's post reassured me!











