Before talking about the what, how, and why, let me give you some background.
I finished building the entire system of my portfolio website three days ago. One of its key features is an AI assistant, Andy, who can help visitors get quick, precise information about me, schedule a meeting with me, and even send me a message. I usually write backend code in the Node.js + TypeScript stack, but this time I was dealing with an AI system, and considering the future updates I want to make, I chose the FastAPI + Python stack for Andy's backend.
I finished writing the system and tested it thoroughly. Now it was time for deployment before making the website public. Should I host it on an AWS VPS, or as a Lambda function? These were the questions revolving in my mind. But I already had a spare, underutilized cPanel server. So, I decided not to waste money and to host Andy's backend on the cPanel server.
Hosting an AI's application backend on a cPanel server, huh?
Andy is built on top of OpenAI's GPT-4o-mini model, with some customized layers of protection and interactivity with system data. As a result, it is not too heavy for a cPanel server to handle. Additionally, the entire backend is built with FastAPI, making it fully asynchronous.
The system's asynchronous design posed a challenge during hosting. cPanel only supports WSGI-based Python applications (the synchronous gateway interface), while my FastAPI code needs to run on ASGI (Asynchronous Server Gateway Interface). I was unaware of this limitation until I tried to host it.
That left me two options: either use another server for hosting, or rewrite the application in Node.js + TypeScript, since running it under WSGI would render it unusable. I did rewrite most of it in TS in some 10-20 minutes, but I realized it would not be feasible for Andy's future development, and I didn't want to retest every condition and parameter again.
Therefore, I chose the third option and decided to experiment with the server.
If you have terminal access, know how to use it, and can prompt-engineer an AI, every server is your local machine.
So, I opened the server's terminal and installed Conda using the following commands.
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
This will download the installation file.
chmod +x Miniconda3-latest-Linux-x86_64.sh
This will make the file executable.
./Miniconda3-latest-Linux-x86_64.sh
This will run the installer. You will have to accept the license agreement.
After installation, initialize conda using this command.
~/miniconda3/bin/conda init
This will set up your shell environment to use Conda.
You must restart the terminal to apply the changes, or run the following command.
source ~/.bashrc
This will reload your shell configuration without restarting the terminal.
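With Conda in place, the backend needs its own environment and an ASGI server to run on. The commands below are an illustrative sketch, not my exact setup: the package list, the `main:app` module path, and port 8080 (the port my proxy later targets) are assumptions you would adjust for your own project.

```shell
# Create and activate an isolated environment for the backend
conda create -n andy python=3.11 -y
conda activate andy

# Install the app's dependencies; uvicorn is the ASGI server FastAPI runs on
pip install fastapi uvicorn

# Start the app on a local port only; the reverse proxy will point here.
# "main:app" is a placeholder for your own module and app object.
uvicorn main:app --host 127.0.0.1 --port 8080
```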
I couldn't directly access the app's port because of firewall protection, but based on what I know, cPanel likely uses the Apache server to handle reverse proxying. This allows services running on specific internal ports to be accessed through another open port.
Some great friends of mine, ChatGPT and Claude, suggested I use the .htaccess file to write the proxy rules. However, the main problem with doing so was that I would need to restart the Apache server to make it work, and I didn't know how many times I would need to change the rules before they worked. I would have done it, but I had no access to restart Apache, neither through the terminal nor through a web interface. So, I would have to contact the service provider just to restart the server, and I didn't know how many times I'd have to do that. This solution was not feasible for me.
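For reference, the suggested .htaccess approach looks roughly like this. It is only a sketch: it assumes Apache's mod_rewrite, mod_proxy, and mod_proxy_http are enabled server-side, and enabling those modules is precisely the kind of change that requires an Apache restart.

```apache
RewriteEngine On
# Forward every request to the local FastAPI port;
# the [P] flag proxies the request instead of issuing a redirect
RewriteRule ^(.*)$ http://127.0.0.1:8080/$1 [P,L]
```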
I asked both ChatGPT and Claude for an alternative solution to this reverse proxy issue, but they kept suggesting Nginx. I already knew that wouldn’t work in my case. So, it was time for me to put my brain to work.
After relaxing with some music, my mind started working again. I came up with a great solution: using a Node.js server to act as a reverse proxy, or, as I like to call it, a bridge.
I wrote the following code using express and http-proxy-middleware.
const express = require("express");
const { createProxyMiddleware } = require("http-proxy-middleware");

const app = express();
const PORT = 3000; // the port the bridge listens on
const TARGET = "http://localhost:8080"; // the FastAPI backend's local port

// Forward every request to the backend; changeOrigin rewrites the Host header
app.use("/", createProxyMiddleware({ target: TARGET, changeOrigin: true }));

app.listen(PORT, () => {
  console.log(`Proxy server running at http://localhost:${PORT}`);
});
After uploading this code to cPanel, I spun up a Node.js server for the desired domain address. Now I could successfully access Andy's backend service through the proxy. It worked like a charm.
This way, I had successfully found a way to run my FastAPI server in fully asynchronous mode, with no noticeable effect on performance.
Sometimes, breaking tradition and thinking outside the box is the key to achieving greater outcomes.
If you want to try Andy, click the Chat with Andy button floating at the top of your screen.