Logos of Microsoft Azure and Python

Deploy a Python REST API to Microsoft Azure

It is good to build an application. It is great to deploy it and witness it running. Let’s see how to do that using Microsoft Azure.

Introduction

In the past months, I started my programming journey with Python, building a REST API using Flask and SQLAlchemy.

I had the opportunity at work to build another API and at one point, it was time to deploy the MVP to Microsoft Azure.

Below, you’ll find the detailed steps with no images because Microsoft moves things around often and screenshots become obsolete quickly…

Naming of blades and tabs could change too, so be patient and look around ;)

For naming resources, you can use the official guide on the matter.

Key vault setup

Create The Key Vault

Via the search on Azure portal, create a Key Vault resource with:

  • Basics tab: set its name, which should be kv-[project name]-[env]
  • Access configuration tab: leave as default
  • Networking tab: leave as default

Access Control Of The Key Vault

Under Access control (IAM), you will need to grant the following role permission:

  • add a Key Vault Administrator role assignment so you can define the secrets.

Tip: add a member by typing the full email.

Configure The Key Vault

Once created, go to the Objects blade and create each secret manually.

The secret names should be kebab-case (e.g. database-password rather than DATABASE_PASSWORD).
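
If you want to sanity-check a secret from your workstation, a minimal sketch with the azure-identity and azure-keyvault-secrets packages could look like this (the vault and secret names are hypothetical, and you need to be signed in, e.g. via az login):

# Sketch: read back a secret to verify the Key Vault setup.
# Requires: pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://kv-myproject-prod.vault.azure.net",  # hypothetical vault name
    credential=DefaultAzureCredential(),
)
print(client.get_secret("database-password").value)  # hypothetical secret name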

Container Registry setup

Create The Container Registry

Via the search bar, type “Container Registry” and select the resource type.

When setting it up, use the Basic plan for lower pricing.

Note

The Azure DevOps pipeline will create the repository when the CI is in place.

Configure The Container Registry

Under the Access control (IAM), you will need to grant the following role permissions:

  • ACR Registry Catalog Lister
  • AcrPull
  • AcrPush

These permissions allow:

  • you to list the images in the registry when browsing them on the Azure Portal.
  • you to configure the Azure pipeline so you can tell Azure DevOps to push images to the Container Registry.
  • the Container App we’ll create later to pull the images

Azure Pipeline

Prerequisites

In Azure, you’ll need the Application Administrator role added to your user account in Azure AD (now Microsoft Entra ID) to allow the pipeline creation in Azure DevOps.

If you aren’t an administrator, you may need to ask one to grant you this role. Unlike role-based permissions on a resource, it isn’t something you can assign yourself.

You’ll also need an Azure DevOps project where your code resides. This article doesn’t detail creating it and assumes you already have one, along with a Git repository storing your application code.

Configure The Pipeline

Go to Azure DevOps and open the Pipelines blade.

Then:

  • on the Connect tab, select Azure Repositories.
  • on the Select tab, select the target repository.
  • on the Configure tab, select the Container Registry created above.

Once you’ve confirmed the pipeline creation, add the tag latest to the generated configuration file so that you can select this tag on the Container App later.

Otherwise you’ll need to update the image to deploy to the Container App after each build.

Also, in the generated azure-pipelines.yml file, make a few modifications to prepare the deployment of the Docker image to the Container App:

  • add a variable projectPath to define the project path:
variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: "e5979aa7-383a-4ddb-9aff-6e531f3d023a"
  imageRepository: "my-app"
  containerRegistry: "mycontainerregistry.azurecr.io"
  dockerfilePath: "$(Build.SourcesDirectory)/docker/Dockerfile"
  projectPath: "$(Build.SourcesDirectory)"
  tag: "$(Build.BuildId)"
  • and specifically define the buildContext to use projectPath:
inputs:
  command: buildAndPush
  repository: $(imageRepository)
  dockerfile: $(dockerfilePath)
  buildContext: $(projectPath)
  containerRegistry: $(dockerRegistryServiceConnection)

Why? If you have structured your project with the Dockerfile in a docker subfolder, the default build context is that subfolder, so docker build fails because it can’t find requirements.txt (the list of your project’s Python dependencies) at the project root. Setting buildContext to the project root, as above, fixes this.

This applies in the following folder structure:

project
│   config.py
│   requirements.txt
│   run.py
├── app
│   └── some files ...
└── docker
    ├── Dockerfile
    └── docker-compose.yml

Storage Account and File Share setup

Create The Storage Account

Create the storage account in the same zone as all other resources.

There are no specific instructions to create the resource, except for the naming: it starts with st and doesn’t allow hyphens.

Go back to the guide quoted in the introduction.

Configure The Storage Account

Under Data Storage > File shares blade,

  • create a file share. In the app we’re building, we need it to store the SQLite database file. Enable a backup if you need one.
  • create another file share to store logs. No need for backups.
  • create another file share for configuration files and other data files you need to be able to edit without updating the code.

I name my file shares the following way: fileshare-[designation]-[project]-[env] where [designation] is either db for database, logs, or json for files I need to edit on the fly.

You’ll link the Container Apps Environment and the Container App to all these file shares.

Container App setup

Prerequisite

You need:

  • a pipeline in Azure DevOps with an image ready.
  • a Contributor role on the Azure Subscription for your user account. If your organization policy prevents you from getting it, you might need to hand the creation instructions to someone who has that permission.

Create The Container App

Via the search bar, type Container App and select the resource type.

On the Basics tab,

  • configure the Resource group, the name and the region
  • customize the Container Apps Environment (it’s what hosts one or more Container Apps) by creating a new one.
    • click New
    • then, on the Basics tab, give it a name following the naming convention from the introduction and leave the rest of the options as they are.
    • next, on the Monitoring tab, disable the logs. The Container App provides a console log stream that can help, and your application should store its file logs in the dedicated file share.
    • leave the Workload profiles and Networking tabs as they are.

On the Container tab,

  • select the image source to be Azure Container Registry (ACR).
  • select the ACR you created previously.
  • select the image.
  • set the image tag to latest.
  • adjust the CPU and Memory to your needs.
  • you could set up the environment variables now, but we’ll look at that in the Configure step. In fact, you may need to adjust them over the lifetime of your application.

On the Bindings tab, leave it blank.

On the Ingress tab, you will need to configure the ingress so the REST API is accessible over HTTP:

  • check the Ingress checkbox
  • select Ingress Traffic to be Accepting traffic from anywhere
  • select HTTP for the Ingress type
  • leave the Client certificate mode as default (no option selected)
  • set the Target port to 5000, the default port of the Flask app we’re deploying.
  • confirm the creation on the Review + create tab.

Note: if the review fails with the following error, it probably means you don’t have the permission to create the resource:

The client “youraccount@example.com” with object id “xxx” doesn’t have authorization to perform action “Microsoft.App/register/action” over scope “/subscriptions/yyy” or the scope is invalid. If access was recently granted, please refresh your credentials. (Code: AuthorizationFailed) (Code: AuthorizationFailed)

Configure the Container Apps Environment

Once Azure has created the Container App, go straight to the Container Apps Environment to link the file shares to it.

To do so, retrieve the name and access key under the Storage Account:

  • go to Security + networking and select the Access keys blade.
  • copy
    • one of the keys.
    • the Storage account name.
    • the file share names created earlier (found under the File shares blade).

Back on the Container Apps Environment:

  • go to Settings and the Azure Files blade.
  • decide on the name using this convention: azure-files-[designation] (where designation would be db, logs, etc.)
  • add a new entry and fill in the fields by pasting the values copied previously.
  • set the Access mode to Read/Write, or Read only (I use the latter for file shares the application must not write to).
  • repeat for all the file shares you need the Container App to use.

What will we use that for? In the REST API, you may use a SQLite database, where the database is a single file that you need to persist.

The same goes for file logging.

You could write the files inside the container image. But, as the container restarts on every new deploy, you’d lose the data…
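
To make the persistence concrete, here is a minimal sketch of the Flask side, assuming the mount paths /project-container/databases and /project-container/logs used later in this article (the file names are hypothetical):

# Sketch: point SQLAlchemy and file logging at the mounted file shares.
# Assumes the volumes are mounted at /project-container/databases and
# /project-container/logs, as declared in the Dockerfile shown later.
import logging

# SQLite database file persisted on the "databases" file share
SQLALCHEMY_DATABASE_URI = "sqlite:////project-container/databases/app.db"

# File logs persisted on the "logs" file share
logging.basicConfig(
    filename="/project-container/logs/app.log",
    level=logging.INFO,
)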

Configure the Container App Deploy Settings

Next, go to the Container App to configure the deploy settings.

To do so, go to the Revisions and replicas blade under Application and click Create new revision.

We’ll set up the Container tab last. You’ll understand why soon.

Go to the Scale tab and adjust the Min replicas and Max replicas based on a scale rule you define. I haven’t used any rule in my scenario, so I’ll skip that and simply set both the min. and max. values to 1.

On the Volumes tab,

  • select Azure file volume as Volume type.
  • give the volume a name. For example, I’d name the databases volume databases. You’ll need one volume for each file share you created.

Important note: the volume name must match the volume you declare in the Dockerfile. It corresponds to what follows the WORKDIR value in the VOLUME instructions below:

# Set work directory of the docker image
WORKDIR /project-container

# Create mount points with absolute paths
VOLUME /project-container/databases
VOLUME /project-container/logs
  • select the target file share (which is really the Azure Files entry you created under the Container Apps Environment).
  • set the mount options to nobrl if the volume contains a SQLite database file. Why? Docker mounts the volume as a CIFS file system, which can’t handle SQLite’s file locking. See this answer and that answer on Stack Overflow. The Microsoft documentation also confirms this.
  • make sure to click the add button before continuing.

Go back to the Container tab to add the volumes based on the file shares you created. In our example so far:

  • leave the Revision details section as it is.
  • under the Container image section, click on the existing image.

It’ll open a right pane:

  • under the Basics tab, you’ll find the details you specified when creating the Container App resource. This is where you can add your environment variables (scroll to the very bottom). We’ll come back to it when we link the Key Vault to pull the secret values from it.
  • under the Health probes tab, leave everything as it is.
  • under the Volume mounts tab, add all the volume mounts you need:
    • the Volume name must equal the name you defined above.
    • the Mount path should be the same value you defined in the Dockerfile, as explained above.
    • leave the Sub path empty.
  • click Save
  • make sure to click Create

Under the Revisions and replicas blade, you should see within a couple of minutes whether the deploy was successful: the revision displays the Running status with a green check mark.

If not, click the revision link in the first column and click Console log stream. The logs may not always appear, so try a few times… In my experience, what the console log stream displays is inconsistent: sometimes I saw logs, sometimes I didn’t.

Linking the Key Vault and the Container App

This is a prerequisite to configuring the environment variables.

First, under the Identity blade in the Container App resource, enable the System assigned identity by toggling the status to On.

You need this so you can grant the identity of the Container App a role-based permission in the Key Vault IAM.

Then, go to Key Vault resource and browse to the Access control (IAM) blade.

Click Add and then Add role assignment.

  • under the Role tab, search for Key Vault Secrets User and select it.
  • under the Members tab,
    • select Managed identity.
    • click Select members.
    • on the right pane that opens,
      • select your subscription,
      • select the Managed identity: you should have a value called Container App (1) (1 being the number of Container Apps you have configured with a system-assigned identity).
      • select the target member in the list under Select. It should display a member with the name of your Container App.
      • make sure to click Select.
      • finish with Review + assign.

Configure the Container App Secrets Environment Variables

To start this section, you’ll need to copy the Vault URI found under the Overview blade of the Key Vault resource.

Also make note of the secret names you created.

Prepare a URL for each with the following format: {vault_uri}/secrets/{secret_name}
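
For example, with a vault named kv-myproject-prod and a secret named database-password (both hypothetical), the URL would be https://kv-myproject-prod.vault.azure.net/secrets/database-password.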

Back to the Container App, browse to the Settings and Secrets blade.

From there, add your secret from the Key Vault by clicking Add button. Then:

  • set the Key value to the secret name you’re adding with a prefix kv_.
  • set the Type to Key Vault reference.
  • set the Value to the corresponding URL you prepared.
  • click Add and wait for the secret to appear.

Attention

If the secret doesn’t appear, even though the Azure Portal gives positive feedback in the notifications, the problem is that you didn’t properly complete the “Linking the Key Vault and the Container App” step.

Configure the Environment Variables

Once you’re done, create a new revision from the Applications > Revisions and replicas blade.

Select your container image, scroll down to the Environment variables section, and:

  • first, add the variables that aren’t secrets, using Manual entry as the Source.
  • second, add the variables whose values come from the Key Vault, using Reference a secret as the Source. Then select the corresponding secret reference as the Value.

Make sure to click Save and create the revision.
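
Once the revision is live, the secret-backed variables are exposed to the application exactly like any other environment variable. A minimal sketch, assuming hypothetical variable names:

# Sketch: read the environment variables from the application code.
# FLASK_CONFIG comes from a "Manual entry" variable; DATABASE_PASSWORD
# comes from a variable referencing a Key Vault-backed secret.
import os

config_name = os.getenv("FLASK_CONFIG", "default")
database_password = os.getenv("DATABASE_PASSWORD")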

You’re done configuring! 🏆

Testing the API

Using Visual Studio Code and the REST Client extension, you can create a little file to test your endpoints:

### Create an event
POST https://capp-myproject-prod.cravesea-7fd7d8d0b6.myregion.azurecontainerapps.io/event
Content-Type: application/json

{
  "id": 1234
}

### GET all events
GET https://capp-myproject-prod.cravesea-7fd7d8d0b6.myregion.azurecontainerapps.io/event/all
Content-Type: application/json
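
If you prefer plain Python over the REST Client extension, a quick smoke test with the requests package could look like this (reusing the hypothetical URL above):

# Sketch: smoke test the deployed API with the requests package.
import requests

BASE_URL = "https://capp-myproject-prod.cravesea-7fd7d8d0b6.myregion.azurecontainerapps.io"

# Create an event
response = requests.post(f"{BASE_URL}/event", json={"id": 1234}, timeout=10)
print(response.status_code, response.json())

# Get all events
response = requests.get(f"{BASE_URL}/event/all", timeout=10)
print(response.status_code, response.json())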

Troubleshooting

Can’t Call The REST API App Even If Deploy Is Successful

If you get the following message when calling your app while the deploy is successful and Azure tells you it’s running:

upstream connect error or disconnect/reset before headers. retried and the latest reset reason: remote connection failure, transport failure reason: delayed connect error: 111

Make sure you use a production-grade server, not the default Flask server.

To fix that, you need to:

  • install the gunicorn package: it’s a production-grade web server.

  • configure the Dockerfile with the following command:

    # Runtime command to start the server
    CMD ["gunicorn", "--bind", "0.0.0.0:5000", "run:app"]

To run the above command, you need a run.py file at the root that contains something like this:

import os

from app import MyApp

config_name = os.getenv("FLASK_CONFIG") or "default"
app = MyApp.create_app(config_name)

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000)

Conclusion

If you’ve read this far, well done and thank you!

I’ll continue sharing more about Python and Azure as I work with it.

Save my website in your bookmarks!

Credit: the logos in the header image are from WorldVectorLogo and SVGRepo. You can find the original images here and there. I built the header image with Sketchpad from Sketch.io.
