Podman Containers - Basics

If you are getting into software development, especially fields like Machine Learning, you’ve likely heard of "containers." Usually, that conversation starts and ends with Docker.

But there is another powerful player in town: Podman.

Why Podman instead of Docker?

Docker relies on a "daemon", a background program that runs with "root" (administrator) privileges on your computer to manage everything.

Podman is different. It is daemonless and rootless.

  • No Daemon: You only run Podman when you need it; there isn't a heavy background process running all the time.
  • Rootless: You can run containers as a normal user without needing administrator passwords. This is generally safer and easier to set up on shared systems.
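
If you want to confirm that your setup really is rootless, a quick check (assuming a reasonably recent Podman) is to ask Podman itself; run as a normal user, this should print true:

$ podman info --format '{{.Host.Security.Rootless}}'
true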

Let's walk through building a simple Machine Learning container using Podman.

Project Structure

The project contains the following files. We will walk through each one in turn.

 $ tree 
/podman-ml
├── app.py                    # simple flask application
├── model.py                  # simple linear model
├── requirements.txt        # python requirements
└── Containerfile            # The instructions for Podman    
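
If you want to follow along, one way to create this skeleton (the directory name podman-ml simply matches the tree above) is:

$ mkdir podman-ml && cd podman-ml
$ touch app.py model.py requirements.txt Containerfile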

Let's go through the code

1. model.py

Let's first begin with the model, which is about the simplest model there is: a single function that takes an input and returns a regression output. Mathematically, this is:

$$ y = 23.34 + 0.02x $$

That's it.

The implementation is just a few lines:

alpha = 23.34  # intercept
beta = 0.02    # slope

def predict_value(x_input: float) -> float:
    """ Return the prediction """
    return alpha + beta * x_input
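
Before containerizing anything, you can sanity-check the model straight from the project directory. Given the coefficients above, an input of 20 should come out to about 23.74:

$ python -c "from model import predict_value; print(predict_value(20))"
23.74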

2. app.py

The second file is a simple Flask app. This app has two views/routes: index and predict. We are not doing anything fancy here, just building a small app that lets us send inputs and receive predictions.

The implementation looks like the following:


from flask import Flask, request, jsonify
from model import predict_value

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello! Your App is Working"

@app.route("/predict", methods=["GET"])
def predict():
    x_input = float(request.args.get("input", 20))  # "input" query param, defaults to 20
    prediction = predict_value(x_input)
    return jsonify({"x_input": x_input, "predicted_value": prediction })

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=9000)

3. requirements.txt

Next is the requirements.txt file, which in this case has only Flask==3.1.2 as a requirement.


Flask==3.1.2
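
With these three files in place, you can optionally smoke-test the app on your own machine before building any image. Assuming Python and pip are available (ideally inside a virtual environment), something like this should work:

$ pip install -r requirements.txt
$ python app.py
# then, from a second terminal:
$ curl 'http://127.0.0.1:9000/'
Hello! Your App is Working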

4. Containerfile

Now we come to the most important part of containerization: the Containerfile, which holds the instructions for building the image.

There are a number of instructions here, so let me show the file first and then explain each instruction in a table.

# Start with python 3.9
FROM python:3.9-slim

# Set the working directory
WORKDIR /app

# Copy requirements and install
COPY requirements.txt .
RUN pip install -r requirements.txt


# Copy the rest of the code
COPY . .

# Expose the port Flask runs on
EXPOSE 9000

# command to run the app
CMD ["python", "app.py"]
| Command | Explanation |
| --- | --- |
| FROM | Defines the Base Image. It's the foundation (like an OS or a specific language version) that you build your project on top of. |
| WORKDIR | Sets the active directory inside the container. Any following commands (like COPY or RUN) happen inside this folder. |
| COPY | Copies files from your computer into the container's file system. In COPY . ., the first . is the source; the second . is the destination. |
| RUN | Executes a command during the build phase. It is used to install software, libraries (like Flask), or set up configurations. |
| EXPOSE | Acts as documentation. It tells the user which port the application inside the container is designed to listen on. |
| CMD | The default command that runs when the container starts (not during the build). There can be only one CMD per file. |

Build the Image

Now that we have a Containerfile, we can build the image. This effectively creates an image that contains everything we need to run the application. We use the following command to achieve this.

$ podman build -t simple_prediction .
STEP 1/7: FROM python:3.9-slim
STEP 2/7: WORKDIR /app
--> 47965fb9f406
STEP 3/7: COPY requirements.txt .
--> 27c49f0ff175
STEP 4/7: RUN pip install -r requirements.txt
Collecting Flask==3.1.2
  Downloading flask-3.1.2-py3-none-any.whl (103 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 103.3/103.3 kB 1.6 MB/s eta 0:00:00
Collecting importlib-metadata>=3.6.0
  Downloading importlib_metadata-8.7.1-py3-none-any.whl (27 kB)
Collecting jinja2>=3.1.2
  Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 134.9/134.9 kB 4.8 MB/s eta 0:00:00
Collecting itsdangerous>=2.2.0
  Downloading itsdangerous-2.2.0-py3-none-any.whl (16 kB)
Collecting blinker>=1.9.0
  Downloading blinker-1.9.0-py3-none-any.whl (8.5 kB)
Collecting markupsafe>=2.1.1
  Downloading markupsafe-3.0.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl (21 kB)
Collecting werkzeug>=3.1.0
  Downloading werkzeug-3.1.5-py3-none-any.whl (225 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 225.0/225.0 kB 49.9 MB/s eta 0:00:00
Collecting click>=8.1.3
  Downloading click-8.1.8-py3-none-any.whl (98 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.2/98.2 kB 5.5 MB/s eta 0:00:00
Collecting zipp>=3.20
  Downloading zipp-3.23.0-py3-none-any.whl (10 kB)
Installing collected packages: zipp, markupsafe, itsdangerous, click, blinker, werkzeug, jinja2, importlib-metadata, Flask
Successfully installed Flask-3.1.2 blinker-1.9.0 click-8.1.8 importlib-metadata-8.7.1 itsdangerous-2.2.0 jinja2-3.1.6 markupsafe-3.0.3 werkzeug-3.1.5 zipp-3.23.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[notice] A new release of pip is available: 23.0.1 -> 25.3
[notice] To update, run: pip install --upgrade pip
--> 49ad5ac54e71
STEP 5/7: COPY . .
--> 8e1e36776171
STEP 6/7: EXPOSE 9000
--> 273b5177660a
STEP 7/7: CMD ["python", "app.py"]
COMMIT simple_prediction
--> 741066d19b30
Successfully tagged localhost/simple_prediction:latest
741066d19b3094353166d34f5fe01da3b34a5e74f47774525becb7159fefc2a5

Now let's break the command down to see what is happening. This is perhaps obvious if you have reviewed the output above.

| Command/Flag | Explanation |
| --- | --- |
| podman build | Tells Podman to create a new image. |
| -t simple_prediction | "Tags" (names) the resulting image simple_prediction so it is easy to find later. |
| . | Crucial! This tells Podman to look in the current directory for the Containerfile and the code. |
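
Before running anything, it is worth confirming the image actually exists locally. The podman images command lists local images, and you should see an entry tagged localhost/simple_prediction:latest with the image ID from the build output:

$ podman images localhost/simple_prediction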

Run the Container

Now that we have the image, we can run a live instance of it as a container. We map port 9000 inside the container (where the Flask app listens) to port 9000 on localhost.

$ podman run -d -p 9000:9000 --name simple_prediction_app simple_prediction
0a914b80ffc0bf91dd6e058c6cae7efdee3688bf0e4fd6402b03a96abcf31e2b

Let's now detail the flags and arguments.

| Flag/Argument | Explanation |
| --- | --- |
| -d | Detached mode. Runs the container in the background so it doesn't lock up your terminal window. |
| -p 9000:9000 | Port mapping. This is vital. It connects port 9000 on your laptop (localhost) to port 9000 inside the container where Flask is listening. |
| --name simple_prediction_app | Gives the running container a specific, easy-to-read name. |
| simple_prediction | The name of the image we built in the previous step. |
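
A quick way to confirm the container is actually up is podman ps, which lists running containers; you should see simple_prediction_app with its 9000 port mapping in the output:

$ podman ps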

Verify Application is Running

There are a couple of ways to verify that your application is running. The first and most obvious is to open http://127.0.0.1:9000 in a web browser.

Using Curl

Alternatively, you can use the curl command to issue a GET request. This returns the content of the page.

$ curl 'http://127.0.0.1:9000/'
Hello! Your App is Working

Curl with predict API

We can also check the predict route. If we do not pass the input query parameter, it defaults to 20.0.

$ curl 'http://127.0.0.1:9000/predict'
{"predicted_value":23.74,"x_input":20.0}
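
We can also pass our own input. With the coefficients above, an input of 100 should give 23.34 + 0.02 * 100, so the response should look something like this:

$ curl 'http://127.0.0.1:9000/predict?input=100'
{"predicted_value":25.34,"x_input":100.0}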

Checking Logs

Checking logs is an important feature. Sometimes your application may fail for one reason or another, and the logs provide the details of any failures. To access them, simply use the following command. Notice that you must pass the container's name to the podman logs command.

$ podman logs simple_prediction_app
 * Serving Flask app 'app'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9000
 * Running on http://10.88.0.11:9000
Press CTRL+C to quit
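
If you want to watch requests arrive in real time, podman logs also accepts a follow flag (similar to tail -f); press CTRL+C to stop following:

$ podman logs -f simple_prediction_app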

Cleaning Up

Finally, let's clean up by stopping the running container and deleting it.

$ podman stop simple_prediction_app
simple_prediction_app
$ podman rm simple_prediction_app
simple_prediction_app
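
If you also want to reclaim the disk space used by the image itself, you can remove it with podman rmi, passing the image name (or ID) we built earlier:

$ podman rmi simple_prediction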

This concludes the basics of using Podman containers for deployment.