Podman Containers - Basics
If you are getting into software development, especially fields like Machine Learning, you’ve likely heard of "containers." Usually, that conversation starts and ends with Docker.
But there is another powerful player in town: Podman.
Why Podman instead of Docker?
Docker relies on a "daemon", a background program that runs with "root" (administrator) privileges on your computer to manage everything.
Podman is different. It is daemonless and rootless.
- No Daemon: You only run Podman when you need it; there isn't a heavy background process running all the time.
- Rootless: You can run containers as a normal user without needing administrator passwords. This is generally safer and easier to set up on shared systems.
Let's walk through building a simple Machine Learning container using Podman.
Project Structure
The project contains the following files. We will walk through each one in turn.
$ tree
/podman-ml
├── app.py # simple flask application
├── model.py # simple linear model
├── requirements.txt # python requirements
└── Containerfile # The instructions for Podman
Let's go through the code
1. model.py
Let's first begin with the model, which is about the simplest model there is. It contains a single function that takes an input and returns a regression output. Mathematically, this is:
$$ y = 23.34 + .02 * x $$
That's it.
# Implementing a simple model
alpha = 23.34
beta = .02

def predict_value(x_input: float) -> float:
    """Return the prediction."""
    return alpha + beta * x_input
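As a quick sanity check, we can call the function directly. The snippet below inlines the same constants so it runs on its own; the example inputs are just illustrations:

```python
# Mirror of model.py, inlined so this snippet is self-contained
alpha = 23.34
beta = .02

def predict_value(x_input: float) -> float:
    """Return the prediction."""
    return alpha + beta * x_input

print(round(predict_value(20), 2))   # 23.34 + 0.02 * 20  -> 23.74
print(round(predict_value(100), 2))  # 23.34 + 0.02 * 100 -> 25.34
```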
2. app.py
The second file is a simple Flask app with two views/routes: index and predict. We are not doing anything fancy here, just a small app that lets us send inputs and receive predictions.
The implementation looks like the following:
from flask import Flask, request, jsonify
from model import predict_value

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello! Your App is Working"

@app.route("/predict", methods=["GET"])
def predict():
    x_input = float(request.args.get("input", 20))
    prediction = predict_value(x_input)
    return jsonify({"x_input": x_input, "predicted_value": prediction})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=9000)
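To see what the /predict route returns without starting a server, here is a small plain-Python sketch that mirrors the route's logic. The helper name simulate_predict is ours, not part of the app; it only illustrates how the query-string default of 20 and the JSON response fit together:

```python
import json
from urllib.parse import parse_qs

alpha, beta = 23.34, .02  # same constants as model.py

def simulate_predict(query_string: str) -> str:
    # request.args.get("input", 20) falls back to 20 when the parameter is absent
    args = parse_qs(query_string)
    x_input = float(args.get("input", [20])[0])
    prediction = alpha + beta * x_input
    # jsonify() produces a JSON body shaped like this
    return json.dumps({"x_input": x_input, "predicted_value": prediction})

print(simulate_predict(""))          # no parameter: default input of 20
print(simulate_predict("input=50"))
```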
3. requirements.txt
Next is the requirements.txt file, which in this case has only a single dependency:
Flask==3.1.2
4. Containerfile
Now we come to the most important part of containerization: the Containerfile, which defines how the image is built.
There are a number of instructions here, so let's look at the code first and then a table explaining each one.
# Start with python 3.9
FROM python:3.9-slim
# Set the working directory
WORKDIR /app
# Copy requirements and install
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the rest of the code
COPY . .
# Expose the port Flask runs on
EXPOSE 9000
# command to run the app
CMD ["python", "app.py"]
| Command | Explanation |
|---|---|
| FROM | Defines the Base Image. It’s the foundation (like an OS or a specific language version) that you build your project on top of. |
| WORKDIR | Sets the active directory inside the container. Any following commands (like COPY or RUN) will happen inside this folder. |
| COPY | Moves files from your computer into the container's file system. The first . is the source; the second . is the destination. |
| RUN | Executes a command during the build phase. It is used to install software, libraries (like Flask), or set up configurations. |
| EXPOSE | Acts as documentation. It tells the user which port the application inside the container is designed to listen on. |
| CMD | The default command that executes only when the container starts. There can only be one CMD per file. |
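One design choice worth noting: requirements.txt is copied and installed before the rest of the code. Podman (like Docker) caches each layer top-down, so as long as requirements.txt is unchanged, rebuilds after editing app.py or model.py reuse the cached pip install layer and finish much faster. A sketch of what a typical rebuild reuses:

```dockerfile
# Layers are cached top-down; a change invalidates that layer and everything below it
COPY requirements.txt .              # cache hit if requirements.txt is unchanged
RUN pip install -r requirements.txt  # reused from cache -> fast rebuilds
COPY . .                             # re-run only when source files change
```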
Build the Image
Now that we have a Containerfile, we can build the image. This creates an image that contains everything we need to run the application. We use the following command:
$ podman build -t simple_prediction .
Now let's break the command down to see what is happening.
| Command/Flag | Explanation |
|---|---|
| podman build | Tells Podman to create a new image. |
| -t simple_prediction | "Tags" (names) the resulting image simple_prediction so it is easy to find later. |
| . | Crucial! This tells Podman to look in the current directory for the Containerfile and the code. |
Run the Container
Now that we have the image, we can run a live instance of the container. We are mapping port 9000 inside the container, where the Flask app listens, to localhost:9000.
$ podman run -d -p 9000:9000 --name simple_prediction_app simple_prediction
Let's now detail the commands.
| Flag/Argument | Explanation |
|---|---|
| -d | Detached mode. Runs the container in the background so it doesn't lock up your terminal window. |
| -p 9000:9000 | Port Mapping. This is vital. It connects port 9000 on your laptop (localhost) to port 9000 inside the container where Flask is listening. |
| --name simple_prediction_app | Gives the running container a specific, easy-to-read name. |
| simple_prediction | The name of the image we built in the previous step. |
Verify Application is Running
There are a couple of ways to verify that your application is running. The first and most obvious is to open http://localhost:9000 in your browser; you should see the index message "Hello! Your App is Working".
Using Curl
Alternatively, you can use the curl command to hit the index route:
$ curl 'http://127.0.0.1:9000/'
Curl with predict API
We can also check the predict route. By default, if we do not pass an input, the app uses 20; to supply your own, append a query parameter such as ?input=50.
$ curl 'http://127.0.0.1:9000/predict'
Checking Logs
Checking logs is an important feature. Sometimes your application may fail for one reason or another, and the logs provide details of any failures. To access them, pass the container's name to the logs command:
$ podman logs simple_prediction_app
Cleaning Up
Finally, let's clean up by stopping the running container and deleting it.
$ podman stop simple_prediction_app
$ podman rm simple_prediction_app
This concludes the basic use of Podman containers for deployment.