Alright, so we’ve got the Import and Export functions finished! Now we need a way to send our data to the OpenAI API, but there’s one more step to take care of first: we need to set up a proxy to avoid direct app-to-app connections.
When working with client-side scripts and APIs, there’s a security measure called CORS, or Cross-Origin Resource Sharing, which prevents direct connections between some services. To connect Google Apps Script and OpenAI, we need to route our requests through a proxy server. The proxy server can make requests to any domain, circumventing the CORS restriction.
Setting up the project
This proxy server can be built in many ways. I’ve chosen Glitch as the example for its simplicity and accessibility, but there are many other options, listed below this example, if you want to customize yours for scale or run it on your own server.
Step 1: Create a new Glitch Project
First, go to Glitch and sign up for an account if you don’t have one. Click on the ‘New Project’ button and select ‘hello-express’ to start a new Node.js project with Express.
Step 2: Delete the existing code
Delete everything in the server.js file. This file is where we’ll write the server-side JavaScript code that will run on our Express.js server.
Understanding the package.json file
Before we dive into the code, let’s look at package.json. This file is used by Node.js to manage the project’s dependencies (other packages your project uses), scripts, version, and other metadata.
In your package.json, you should have:
{
  "name": "glitch-hello-express",
  "version": "0.0.1",
  "description": "A simple Node app built on Express, instantly up and running.",
  "main": "server.js",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.17.1",
    "axios": "^0.23.0"
  },
  "engines": {
    "node": "14.x"
  },
  "repository": {
    "url": "https://glitch.com/edit/#!/hello-express"
  },
  "license": "MIT",
  "keywords": [
    "node",
    "glitch",
    "express"
  ]
}
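Note that the default hello-express starter may not include axios, so add that dependency line yourself if it’s missing; Glitch will install it automatically once you save package.json.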
- name: The name of your application.
- version: The current version of your application.
- description: A short description of your application.
- main: The entry point to your application. Here, it is server.js.
- scripts: This object can contain various script commands that can be run from the command line. The start command here will start your Express server.
- dependencies: This lists all the packages your project depends on, with their versions. express is the web server framework we’re using, and axios is a package for making HTTP requests.
- engines: This specifies the version of Node.js this app works on.
Writing your server.js file
Next, we will implement the code for our Express.js server in the server.js file. Here’s the full code you will be using:
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json()); // parse incoming JSON bodies

app.post('/', async (req, res) => {
  const prompt = req.body.prompt;
  const maxTokens = req.body.maxTokens || 500;

  try {
    // Forward the request to OpenAI, attaching the secret key from .env
    const response = await axios.post('https://api.openai.com/v1/completions', {
      model: "text-davinci-003",
      prompt,
      max_tokens: maxTokens,
    }, {
      headers: {
        'Authorization': `Bearer ${process.env.OPENAI_KEY}`,
        'Content-Type': 'application/json'
      }
    });
    // Relay OpenAI's response back to the caller
    res.json(response.data);
  } catch (error) {
    res.status(500).json({ message: 'Something went wrong.' });
  }
});

app.listen(process.env.PORT, () => {
  console.log("Your app is listening on port " + process.env.PORT);
});
Here’s what each section does:
- Import the required packages: express is the web server framework we’re using; axios is a package for making HTTP requests.
- Set up an Express application: We create an Express application and configure it to parse incoming JSON payloads.
- Define a POST endpoint at ‘/’ for the app: When this endpoint receives a POST request, it will:
  - Extract the prompt and maxTokens from the request’s body.
  - Make a POST request to OpenAI’s completions endpoint, sending the prompt and max tokens as parameters and the OpenAI key as a header.
  - Respond with the OpenAI API’s response, or a 500 error if something goes wrong.
- Start the server: This makes your app start listening for HTTP requests on the port defined in the PORT environment variable.
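To make the flow concrete, here’s a hypothetical exchange. A caller might POST this JSON body to the proxy:
{
  "prompt": "Write a haiku about spreadsheets.",
  "maxTokens": 64
}
The proxy then relays whatever OpenAI returns. An abridged, illustrative completions response looks roughly like this:
{
  "id": "cmpl-...",
  "object": "text_completion",
  "model": "text-davinci-003",
  "choices": [
    {
      "text": "Rows and columns gleam...",
      "index": 0,
      "finish_reason": "stop"
    }
  ]
}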
Configuring the .env file
Lastly, we’ll set up our .env file. This is where we store our environment variables: values that are needed by our code and may change between different environments, like a local machine, a staging server, and a production server.
In our case, we only need to define one environment variable ourselves (Glitch provides the PORT variable automatically):
- OPENAI_KEY: This should be set to your secret OpenAI key.
Do not share the .env file or publish it online. It should be kept secret.
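Your .env file will look something like this (the value below is just a placeholder):
OPENAI_KEY=sk-your-secret-key-here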
Testing your application
Now your Glitch app is ready to go. To test it, you can send a POST request to the URL of your Glitch app (which you can find by clicking the ‘Show’ button in Glitch). The body of your request should be a JSON object with a prompt field and, optionally, a maxTokens field. Your Glitch app will forward the request to OpenAI, get the response, and send it back to you.
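Since we’ll ultimately be calling this proxy from Google Apps Script, here’s a minimal test sketch using UrlFetchApp. The URL is a placeholder; swap in your own Glitch app’s address:
function testProxy() {
  // Placeholder: replace with your own Glitch app's URL
  const url = 'https://your-project-name.glitch.me/';
  const options = {
    method: 'post',
    contentType: 'application/json',
    payload: JSON.stringify({ prompt: 'Say hello!', maxTokens: 100 }),
  };
  // Send the request and log whatever the proxy returns
  const response = UrlFetchApp.fetch(url, options);
  Logger.log(response.getContentText());
}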
Here are a few alternatives to using Glitch for the proxy that might be better for customization or scale:
- Cloudflare Workers: This is a serverless computing platform from Cloudflare that lets you deploy code to Cloudflare’s edge network. It provides functionality similar to a reverse proxy and can be a good alternative to Glitch; see the sketch after this list.
- NGINX: NGINX is a popular web server that can also be used as a reverse proxy, load balancer, and HTTP cache. It’s a powerful, flexible tool, and can be configured to handle a variety of networking tasks.
- Apache HTTP Server with mod_proxy: Apache is another widely-used web server that, when combined with the mod_proxy module, can function as a proxy server. Mod_proxy is a powerful, flexible module that supports forwarding and reverse proxying.
- HAProxy: HAProxy is a free, open-source software that provides a high availability load balancer and proxy server for TCP and HTTP-based applications. It’s widely known for its performance and stability.
- Squid: Squid is a caching and forwarding HTTP web proxy. It has a wide variety of uses, from speeding up a web server by caching repeated requests, to caching web, DNS and other computer network lookups for a group of people sharing network resources.
- AWS Lambda with Amazon API Gateway: AWS Lambda is a serverless computing service that lets you run your code without provisioning or managing servers. When combined with Amazon API Gateway to handle HTTP requests and responses, it can function similarly to a proxy server.
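As a taste of the first option, here’s a minimal sketch of the same proxy written as a Cloudflare Worker. It assumes you’ve stored your key as a Worker secret named OPENAI_KEY (for example via wrangler secret put OPENAI_KEY); treat it as a starting point rather than a finished implementation:
export default {
  async fetch(request, env) {
    // Only accept POST requests, mirroring the Express version
    if (request.method !== 'POST') {
      return new Response('Method not allowed', { status: 405 });
    }
    const { prompt, maxTokens } = await request.json();
    // Forward the request to OpenAI with the secret key attached
    const upstream = await fetch('https://api.openai.com/v1/completions', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${env.OPENAI_KEY}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'text-davinci-003',
        prompt,
        max_tokens: maxTokens || 500,
      }),
    });
    // Relay OpenAI's response back to the caller
    return new Response(await upstream.text(), {
      headers: { 'Content-Type': 'application/json' },
    });
  },
};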
In the next part we will create a function to send each row of our spreadsheet data as a request to the OpenAI API. When we get a response, we will be able to connect all of the functions into one Elite Sheet that does it all for us!
See you soon!