by Dr Freddy Wordingham
Cloud Ops
7. Serving our application
In the last lesson we built a frontend application.
Now we want to look at how we can pair this TypeScript code with Python - our workhorse language. This chapter will walk you through serving our app with FastAPI via the Uvicorn webserver.
🌐 What is a Webserver?
A web server is software that serves web pages, or other data, in response to HTTP requests.
Sometimes these requests come directly from a browser (like when you visit google.com), but they can also be made programmatically from Python, JavaScript or pretty much any language! Essentially, they allow us to trigger code to run on another machine, saving our local machine from doing the heavy lifting and allowing it to focus on things like the interface and presentation.
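To make this concrete, here's a tiny self-contained sketch (plain Python standard library, nothing to do with FastAPI yet) that starts a web server in a background thread and then makes an HTTP request to it programmatically - the same request a browser would make:

```python
import http.server
import threading
import urllib.request

# A minimal handler that answers every GET request with a fixed message.
class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a tiny web server!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" side: an HTTP request made programmatically from Python.
with urllib.request.urlopen(f"http://127.0.0.1:{port}") as response:
    reply = response.read().decode()

server.shutdown()
print(reply)
```

The server and client happen to live in one process here, but the same `urlopen` call works against a machine on the other side of the world - that's the point of offloading work to a webserver.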
📦 Update our Dependencies
We're going to need a couple of new dependencies, so let's update our requirements.txt file with the following:
fastapi
uvicorn
mangum
And then update our installation by running:
pip install -r requirements.txt
FastAPI is a modern, high-performance web framework for building APIs with Python, based on standard Python type hints.
Uvicorn is an ASGI server that serves as the interface between FastAPI and the outside world. We won't need it when we're in production, but it's useful for testing.
Mangum is used to deploy the FastAPI app serverlessly. It'll route the Lambda request into the appropriate FastAPI route.
ℹ️ ASGI (Asynchronous Server Gateway Interface) is a specification between web servers and Python web applications. It serves as an evolution of the WSGI standard, adding support for WebSockets and other asynchronous protocols. This allows us developers to build more scalable, real-time web applications.
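At its lowest level, an ASGI application is just an async callable that takes a connection scope plus receive and send callables - FastAPI builds on top of this interface, and Uvicorn speaks it from the server side. Here's a rough sketch of a bare ASGI app, driven by hand so you can see the messages that flow across the interface (normally Uvicorn would do this driving for you):

```python
import asyncio

# A minimal raw ASGI application: an async callable taking scope/receive/send.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello, ASGI!"})

# Play the role of the ASGI server ourselves, collecting what the app sends.
async def call_app():
    messages = []

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        messages.append(message)

    await app({"type": "http", "method": "GET", "path": "/"}, receive, send)
    return messages

messages = asyncio.run(call_app())
print(messages)
```

Because everything is async, the server can juggle many in-flight requests (and WebSocket connections) without a thread per connection - the scalability win the note above refers to.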
⏱ FastAPI
Now, let's create a new file called main.py in the root directory:
touch main.py
⚠️ We're creating this new file in the root of our project, NOT the scripts directory.
This file will define our web-server.
Here's what it looks like:
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from mangum import Mangum
import os
# Instantiate the app
app = FastAPI()
# Serve our React application at the root
app.mount("/", StaticFiles(directory=os.path.join("frontend", "build"), html=True), name="build")
# CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Permits requests from all origins.
    allow_credentials=True,  # Allows cookies and credentials to be included in the request.
    allow_methods=["*"],  # Allows all HTTP methods.
    allow_headers=["*"],  # Allows all headers.
)
# Define the Lambda handler
handler = Mangum(app)
# Prevent Lambda showing errors in CloudWatch by handling warmup requests correctly
def lambda_handler(event, context):
    if "source" in event and event["source"] == "aws.events":
        print("This is a warm-up invocation")
        return {}
    else:
        return handler(event, context)
🖇 Dependencies
From the top:
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from mangum import Mangum
import os
Here we're just importing the dependencies we need.
👨💻 App
# Instantiate the app
app = FastAPI()
# Serve our React application at the root
app.mount("/", StaticFiles(directory=os.path.join("frontend", "build"), html=True), name="build")
We're instantiating our FastAPI application, and then mounting the frontend React application we built in the previous chapter at "/".
ℹ️ "/" is the root of our site. In other words, when someone visits ourwebsite.com, this is what's going to be shown as the homepage.
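The html=True flag roughly means "serve the requested file if it exists, otherwise fall back to index.html". Here's a loose sketch of that lookup rule (this is not Starlette's actual implementation, just the idea), using a temporary directory as a stand-in for frontend/build:

```python
import os
import tempfile

# Rough sketch of the StaticFiles(html=True) lookup rule: serve the requested
# file if it exists, otherwise fall back to index.html in that directory.
def resolve(directory: str, path: str):
    candidate = os.path.join(directory, path.lstrip("/"))
    if os.path.isfile(candidate):
        return candidate
    index = os.path.join(candidate, "index.html")
    if os.path.isfile(index):
        return index
    return None  # would become a 404

# Stand-in for frontend/build, containing just an index.html.
build_dir = tempfile.mkdtemp()
with open(os.path.join(build_dir, "index.html"), "w") as f:
    f.write("<h1>our app</h1>")

homepage = resolve(build_dir, "/")           # falls back to index.html
missing = resolve(build_dir, "/missing.js")  # not in the build
```

This fallback is what makes visiting the bare domain show the React homepage rather than a directory listing or a 404.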
🍪 Cross-Origin Resource Sharing
# CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Permits requests from all origins.
    allow_credentials=True,  # Allows cookies and credentials to be included in the request.
    allow_methods=["*"],  # Allows all HTTP methods.
    allow_headers=["*"],  # Allows all headers.
)
CORS stands for Cross-Origin Resource Sharing. It's a security feature implemented by web browsers to control requests made to different origins from web pages. It prevents unauthorized domains from making requests to a server, ensuring that only safe and approved cross-origin requests are allowed.
When building our web application, CORS errors and warnings can be annoying as they interrupt the development workflow, forcing us to debug issues related to cross-origin policy. These errors may even appear when we're just trying to fetch data from our own local API.
However, it's important to note that CORS serves a valuable purpose in securing the application. A lax CORS policy can expose the system to security vulnerabilities like Cross-Site Request Forgery (CSRF) or data breaches. Hence, while configuring CORS, especially in a production environment, it's crucial to be restrictive and explicit about what kinds of cross-origin requests the server should allow.
⚠️ In a production environment, we should tighten up the CORS settings for enhanced security. Instead of using the wildcard "*" for allow_origins, you should specify the exact origins that are allowed to access your resources. Setting allow_credentials to True should also be reconsidered, especially if you don't need to share credentials like cookies between origins. Use allow_methods and allow_headers to list only the HTTP methods and headers that your application actually uses (we'll only use GET and POST). Doing so minimises the attack surface and ensures that only authorized actions can be performed.
For example, you could use:
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://yourdomain.com", "https://www.yourdomain.com"],
    allow_credentials=False,
    allow_methods=["GET", "POST"],
    allow_headers=["Authorization", "Content-Type"],
)
# Define the Lambda handler
handler = Mangum(app)
This uses Mangum to adapt an ASGI application (like one created with FastAPI) for use with AWS Lambda.
# Prevent Lambda showing errors in CloudWatch by handling warmup requests correctly
def lambda_handler(event, context):
    if "source" in event and event["source"] == "aws.events":
        print("This is a warm-up invocation")
        return {}
    else:
        return handler(event, context)
When a Lambda function is invoked after being idle for some time, AWS will need to start a new instance of that function. This involves loading the code, initialising the runtime, and executing any startup tasks (like loading a model). This process, known as a cold start, adds latency to the function's response time. A common mitigation is to keep the function warm by invoking it on a schedule with an EventBridge (formerly CloudWatch Events) rule; those scheduled events arrive with "source" set to "aws.events", which is how our handler recognises and short-circuits them.
In summary, this script efficiently distinguishes between warm-up invocations and actual requests, handling each appropriately. It prevents unnecessary error logs in CloudWatch for warm-up events and ensures that only real requests are processed by your application logic.
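Because the real handler needs AWS (and a built frontend) to run, here's a self-contained sketch of just the dispatch logic, with a stub standing in for Mangum(app), so you can see both paths without deploying anything:

```python
# Stub standing in for handler = Mangum(app); the real one translates the
# Lambda event into an ASGI request and routes it through the FastAPI app.
def handler(event, context):
    return {"statusCode": 200, "body": "answered by the FastAPI app"}

def lambda_handler(event, context):
    # Scheduled EventBridge warm-up pings arrive with source == "aws.events".
    if "source" in event and event["source"] == "aws.events":
        print("This is a warm-up invocation")
        return {}
    else:
        return handler(event, context)

# A warm-up ping and a (simplified) API Gateway-style request.
warmup_result = lambda_handler({"source": "aws.events"}, None)
request_result = lambda_handler({"httpMethod": "GET", "path": "/"}, None)
```

The warm-up branch returns an empty dict immediately, so Mangum never sees an event it can't parse and nothing lands in CloudWatch as an error; real requests flow through untouched.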
🚀 Run
⚠️ Reminder, you need to "build" the frontend part of our application first with:
cd frontend
npm install
npm run build
cd -
We can start our webserver using:
python -m uvicorn main:app --port 8000 --reload
After which you can visit http://127.0.0.1:8000, and see your application:
This is exciting because everything looks just as it did before, except the port number is now 8000 instead of 3000. That means we've successfully served our frontend app from a backend!
Next time we'll look at sending a query from our frontend running on TypeScript, to be answered by our backend running on Python.
📑 APPENDIX
🐓 How to Run
🧱 Build Frontend
Navigate to the frontend/ directory:
cd frontend
Install any missing frontend dependencies:
npm install
Build the files for distributing the frontend to clients:
npm run build
🖲 Run the Backend
Go back to the project root directory:
cd ..
Activate the virtual environment, if you haven't already:
source .venv/bin/activate
Install any missing packages:
pip install -r requirements.txt
If you haven't already, train a CNN:
python scripts/train.py
Continue training an existing model:
python scripts/continue_training.py
Serve the web app:
python -m uvicorn main:app --port 8000 --reload
🗂️ Updated Files
Project structure
.
├── .venv/
├── .gitignore
├── resources
│ └── dog.jpg
├── frontend
│ ├── build/
│ ├── node_modules/
│ ├── public/
│ ├── src
│ │ ├── App.css
│ │ ├── App.test.tsx
│ │ ├── App.tsx
│ │ ├── index.css
│ │ ├── index.tsx
│ │ ├── logo.svg
│ │ ├── react-app-env.d.ts
│ │ ├── reportWebVitals.ts
│ │ └── setupTests.ts
│ ├── .gitignore
│ ├── package-lock.json
│ ├── package.json
│ ├── README.md
│ └── tsconfig.json
├── output
│ ├── activations_conv2d/
│ ├── activations_conv2d_1/
│ ├── activations_conv2d_2/
│ ├── activations_dense/
│ ├── activations_dense_1/
│ ├── model.h5
│ ├── sample_images.png
│ └── training_history.png
├── scripts
│ ├── classify.py
│ ├── continue_training.py
│ └── train.py
├── main.py
├── README.md
└── requirements.txt
requirements.txt
tensorflow
matplotlib
fastapi
mangum
uvicorn
main.py
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.staticfiles import StaticFiles
from mangum import Mangum
import os
# Instantiate the app
app = FastAPI()
# Serve our React application at the root
app.mount("/", StaticFiles(directory=os.path.join("frontend", "build"), html=True), name="build")
# CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # Permits requests from all origins.
    allow_credentials=True,  # Allows cookies and credentials to be included in the request.
    allow_methods=["*"],  # Allows all HTTP methods.
    allow_headers=["*"],  # Allows all headers.
)
# Define the Lambda handler
handler = Mangum(app)
# Prevent Lambda showing errors in CloudWatch by handling warmup requests correctly
def lambda_handler(event, context):
    if "source" in event and event["source"] == "aws.events":
        print("This is a warm-up invocation")
        return {}
    else:
        return handler(event, context)