Transitioning from PlanetScale to Self-Hosted Postgres with Coolify and an HTTP Proxy

Saturday, 20 April 2024

With the recent sunsetting of PlanetScale’s hobby tier, and not wanting to pay $39 a month for a database which hasn’t got a tonne of data (and probably never will), I started looking into a new database solution for this site (built with Astro). I mention the site being built with Astro because, around the time of writing, Astro DB was announced and could very easily have been the solution I went with. Yes, they have a generous free tier and have made a point that it will be free forever, but things change. So I had a choice of Astro DB, which I considered the “easy” solution, but I am not one to take the easy route. Also, around that time Vercel was mentioning some price hikes, which don’t affect this site, BUT in my social media feeds I was seeing more and more weight behind the idea of self-hosting. Now, this appeals to me. I have no clue about devops and that side of things, so why not take this opportunity to cause myself some pain and try to get something up and running? That way I will be in full control (potentially to my own dismay), learn some new and exciting things, and feel more encouraged to just “throw” something up without getting an exceedingly high bill from one of the cloud providers.

With a slightly clearer idea of which direction I wanted to follow, I set out to work out how to actually do this. Then I discovered a pretty cool project by Andras Bacsai called Coolify. What is Coolify exactly? As the website states, it is: “An open-source & self-hostable Heroku / Netlify / Vercel alternative”, which is instantly cool, but that doesn’t even scratch the surface; it can do much, much more. Now, how do I get this thing up and running? The answer is a VPS. I’m not going to go into a tonne of detail about what a VPS is, but you basically pay a monthly fee for resources on a server shared with multiple users, which is super cheap and smaller scale than renting an entire server. I didn’t spend a lot of time searching for the perfect solution; I just followed the guide on Coolify, which recommended Hetzner.

After following the guide to get this set up, and also checking out this blog post, I got Coolify up and running, woo!

Migrating from PlanetScale to Postgres

Why move to Postgres, I hear you ask… Well, as I mentioned earlier, I like causing myself pain and frustration. There is a half truth there, but mainly I wanted to experiment more and fill gaps in my knowledge (there are a lot of those around databases), and we use Postgres quite a lot at Logic and Rhythm when creating a project from scratch for a client. But how did I create the database? This is another reason why Coolify is amazing: it has a tonne of one-click resources which you can get up and running crazy quickly, and they also support databases! In this case there is a one-click solution for getting a Postgres db up and running, wonderful! Now looms the question: but your database is MySQL, how did you convert it to Postgres and migrate over all of your data? This is where pgloader comes in. pgloader is a really cool project, and I really recommend checking out its homepage and documentation. If you haven’t already, install pgloader so we can use it on the command line:

brew install pgloader

To move everything over I created a config.load file with the following:

load database
    from <planetscale-connection-string>?sslmode=require
    into <postgres-connection-string>
    WITH include drop, create tables, create indexes, reset sequences, foreign keys
    CAST column resource.content to text drop default
    alter schema 'adamcollier' rename to 'public'
;

In my case I had to cast a field in the resource table, but you shouldn’t need to do this. Now all I needed to run was:

pgloader ./config.load

and with that my data was migrated from PlanetScale to my new Postgres database!
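If you want to sanity check the migration, comparing row counts between the old and new databases is a quick way to do it. Here is a minimal sketch using psql; the connection string and the table name are placeholders of mine, not values from this site, so swap in your own:

```shell
# Hypothetical post-migration check: count rows in one migrated table.
# POSTGRES_URL and the table name "post" are assumptions; use your own values.
POSTGRES_URL="${POSTGRES_URL:-postgres://user:password@localhost:5432/mydb}"
TABLE_NAME="post"

# -t strips headers and -A strips alignment, so we get a bare number back
psql "$POSTGRES_URL" -t -A -c "select count(*) from $TABLE_NAME;"
```

If the counts line up with what PlanetScale reported for each table, the migration most likely went through cleanly.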

The problem with Postgres and Vercel edge functions

When I had migrated all of my data over I thought that was job done: I could just connect to my new Postgres database and start pulling in that sweet sweet data. However, I deploy this site with Vercel and take advantage of edge functions. Why edge functions? Because, for a site that doesn’t get a lot of traffic, serverless function cold starts were painfully slow. So I opted for edge functions, with one significant con… they don’t support the Node runtime. So when I was trying to use a Postgres client package to connect and grab data from the database, it failed miserably. The solution? Similar to how PlanetScale handles it, an HTTP proxy.

Even though edge functions don’t support the Node runtime, they can still make HTTP requests. So we create a service (in our case a Fastify app) that sits between the app and the database: the app sends an HTTP request to the proxy, the proxy grabs the data from the database for us, and it sends the data back to the app. Damn, I hope that makes sense.

Adjusting Drizzle ORM

It’s worth noting here that I am using Drizzle ORM, which was previously set up for PlanetScale, so we need to adjust the setup to accommodate our HTTP proxy. I had no clue this was a thing in Drizzle, so it was pretty sweet when the team at Drizzle pointed me in the right direction: tucked away at the bottom of the Postgres page, the docs show you the basics of how to get the HTTP proxy driver set up. Here is what I currently have:

import * as schema from "./schema";
import { drizzle } from "drizzle-orm/pg-proxy";

export const db = drizzle(
  async (sql, params, method) => {
    try {
      const response = await fetch(`${import.meta.env.HTTP_PROXY_URL}/query`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          sql,
          params,
          method,
        }),
      });

      const rows = await response.json();

      return { rows: rows };
    } catch (e: any) {
      // fetch errors don't have an axios-style e.response.data, so log the error itself
      console.error("Error from pg proxy server: ", e);
      return { rows: [] };
    }
  },
  { schema }
);

import.meta.env.HTTP_PROXY_URL here will be the URL of our up-and-running Fastify proxy server.
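For reference, the corresponding entry in the Astro project’s .env might look something like this (the URL is a placeholder; use whatever URL Coolify gives you when you deploy the proxy later on):

```shell
# .env — HTTP_PROXY_URL points at the deployed Fastify proxy (placeholder URL)
HTTP_PROXY_URL=https://proxy.example.com
```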

Super simple, right? We don’t need to change anything else; everything just flows through this.

Creating the HTTP proxy app with Fastify

A word of warning here: I have never used Fastify before, so apologies if I am doing anything insanely dumb (please reach out if I am and I can fix it for anyone else that reads this post). But here is what our Fastify project structure looks like. Super simple, but maybe note the use of TypeScript here:

.
├── index.ts
├── package.json
├── pnpm-lock.yaml
└── tsconfig.json
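The tsconfig.json is listed above but never shown, so here is a minimal sketch of what a config for this setup could look like. These compiler options are my assumptions for a plain Node + tsx project, not the exact file from this site:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "CommonJS",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["index.ts"]
}
```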

the package.json:

{
  "name": "fastify-proxy",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "build": "tsc -p tsconfig.json",
    "dev": "tsx watch index.ts",
    "start": "tsx index.ts",
    "config": "tsc --init"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@fastify/postgres": "^5.2.2",
    "dotenv": "^16.4.5",
    "fastify": "^4.26.2",
    "pg": "^8.11.5",
    "tsx": "^4.7.1"
  },
  "devDependencies": {
    "@types/node": "^20.12.3",
    "@types/pg": "^8.11.4",
    "pino-pretty": "^11.0.0",
    "typescript": "^5.4.3"
  }
}

and our main index.ts file:

import fastify, { FastifyRequest } from "fastify";
import { fastifyPostgres } from "@fastify/postgres";
import dotenv from "dotenv";
dotenv.config();

const server = fastify({
  logger: {
    transport: {
      target: "pino-pretty",
    },
  },
});

server.register(fastifyPostgres, {
  connectionString: process.env.DATABASE_URL,
});

interface QueryBody {
  sql: string;
  params: any[];
  method: string;
}

server.post(
  "/query",
  async (request: FastifyRequest<{ Body: QueryBody }>, reply) => {
    try {
      const { sql, params, method } = request.body;

      // prevent multiple queries
      const sqlBody = sql?.replace(/;/g, "");

      if (method === "all") {
        try {
          const result = await server.pg.query({
            text: sqlBody,
            values: params,
            rowMode: "array",
          });

          return result.rows;
        } catch (e) {
          reply.status(500).send({ error: e });
        }
      } else if (method === "execute") {
        try {
          const result = await server.pg.query({
            text: sqlBody,
            values: params,
          });

          return result.rows;
        } catch (e) {
          reply.status(500).send({ error: e });
        }
      } else {
        reply.status(500).send({ error: "Unknown method value" });
      }
    } catch (e) {
      console.log("error from /query", e);
      reply.status(500).send({ error: e });
    }
  }
);

server.listen({ port: 8080, host: "0.0.0.0" }, (err, address) => {
  if (err) {
    console.error(err);
    process.exit(1);
  }
  console.log(`Server listening at ${address}`);
});

Again, not a lot of code here, but pretty powerful stuff. Essentially, any queries we want to make from our Astro app will hit the /query endpoint of the HTTP proxy, which connects to the database, runs the query and returns the data. If there is an error, we handle it and send it back with an appropriate status.
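Before wiring the proxy into the site, you can smoke test it locally while pnpm dev is running. This is a sketch: the localhost URL assumes the port 8080 from server.listen above, and the JSON body mirrors the { sql, params, method } shape that Drizzle’s pg-proxy driver sends:

```shell
# Hypothetical smoke test for the /query endpoint of a locally running proxy.
PROXY_URL="${PROXY_URL:-http://localhost:8080}"

# The same shape Drizzle's pg-proxy driver POSTs: sql, params and method.
QUERY_BODY='{"sql":"select 1 as one","params":[],"method":"all"}'

curl -s -X POST "$PROXY_URL/query" \
  -H "Content-Type: application/json" \
  -d "$QUERY_BODY"
```

Because the server queries with rowMode: "array", a successful response comes back as rows of arrays rather than objects, which is exactly what the Drizzle driver expects.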

Deploying our HTTP proxy

Again Coolify makes this suuuuper simple. All we need to do is:

  • Create a new GitHub repo
  • Push up our HTTP proxy code
  • Head into the Coolify dashboard > resources
  • Add a new GitHub resource (you may need to set this up first, check out this guide)
  • Select the repo you want to deploy
  • Deploy the app, Coolify will handle building everything for you
  • Grab the URL and add that to your .env for the HTTP proxy!

Now your HTTP proxy is up and running and available on the web! Additionally, if you ever decide to make some improvements (which inevitably you will as this is a simple example) you can just push up your changes to your main branch and Coolify will automatically deploy the new version for you!

Conclusion

It’s been a while since I’ve written something on this site, and it felt incredibly refreshing doing so after so long. Hopefully I can find more time for this, and in turn record more of my learnings along the way; I’m always one to stay curious, after all. If you did find this post useful it would be great to know, or if you have some extra goodies to share, you can find me on Twitter (I will always call it Twitter).