Developer Community Blog

Deploying Machine Learning Models with Flask: A Step-by-Step Guide

Published: October 27, 2023 | By: Alex Chen | Category: Machine Learning, Web Development

Deploying a machine learning model to production is a crucial step in realizing its value. While there are many frameworks and platforms available, Flask, a lightweight Python web framework, offers a straightforward and flexible approach, especially for small to medium-sized projects or for quickly prototyping API endpoints for your models.

In this guide, we'll walk through the process of wrapping your trained ML model in a Flask API so it can receive requests and return predictions.

Prerequisites

To follow along, you'll need Python 3 and pip installed, basic familiarity with Python and the command line, and a trained model saved to disk. If you don't have a model handy, Step 4 shows how to create a simple dummy one for testing.

Step 1: Set Up Your Project Environment

First, create a new directory for your project and navigate into it. It's good practice to use a virtual environment to manage your dependencies.


mkdir flask_model_deployment
cd flask_model_deployment
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
            

Step 2: Install Necessary Libraries

You'll need Flask to build the web server, plus whatever libraries your model depends on for loading and data handling. For this guide, that means scikit-learn, pandas, and joblib.


pip install Flask scikit-learn pandas joblib
            

Step 3: Prepare Your Model and Data Handling

Assume you have a trained model file named model.pkl in your project directory. You'll also need a way to preprocess incoming data to match the format your model expects. For this example, let's imagine a simple model that predicts a numerical value based on a few input features.

Create a file named app.py and add the following code:


import joblib
import pandas as pd
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the trained model
# Make sure 'model.pkl' is in the same directory or provide the correct path
try:
    model = joblib.load('model.pkl')
    print("Model loaded successfully.")
except FileNotFoundError:
    print("Error: model.pkl not found. Please ensure the model file is in the correct path.")
    model = None
except Exception as e:
    print(f"An error occurred while loading the model: {e}")
    model = None

# Placeholder for preprocessing if needed
def preprocess_input(data):
    # Convert input dictionary to a pandas DataFrame
    # This should match the structure your model was trained on
    # Example: {'feature1': value1, 'feature2': value2}
    df = pd.DataFrame([data])
    # Add any further preprocessing steps here (e.g., scaling, encoding)
    return df

@app.route('/')
def home():
    return "Welcome to the Model Deployment API!"

@app.route('/predict', methods=['POST'])
def predict():
    if model is None:
        return jsonify({'error': 'Model is not loaded or failed to load'}), 500

    try:
        # Get data from POST request
        data = request.get_json()

        if not data:
            return jsonify({'error': 'Invalid input: No JSON data provided'}), 400

        # Preprocess the input data
        processed_data = preprocess_input(data)

        # Make prediction
        prediction = model.predict(processed_data)

        # Return the prediction
        # Assuming prediction is a single value or an array of values
        return jsonify({'prediction': prediction.tolist()})

    except Exception as e:
        return jsonify({'error': str(e)}), 500

if __name__ == '__main__':
    # This starts Flask's built-in development server.
    # During development you can enable the debugger and auto-reload with:
    # app.run(debug=True, host='0.0.0.0', port=5000)
    # In production, serve the app with a WSGI server such as Gunicorn or uWSGI
    # instead of relying on app.run().
    app.run(host='0.0.0.0', port=8000)
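
The preprocess_input function above is only a placeholder that wraps the incoming JSON in a DataFrame. For a real model you usually want to validate the payload and enforce the exact column order used during training. Below is a minimal sketch of what that might look like; EXPECTED_FEATURES is a hypothetical list that you would replace with your own feature names.


import pandas as pd

# Hypothetical list of the feature names the model was trained on, in training order.
EXPECTED_FEATURES = ['feature1', 'feature2']

def preprocess_input(data):
    # Reject payloads that are missing any required feature.
    missing = [f for f in EXPECTED_FEATURES if f not in data]
    if missing:
        raise ValueError(f"Missing required features: {missing}")

    # Build a single-row DataFrame with columns in training order, so the
    # model always receives features in the layout it expects.
    row = [[data[f] for f in EXPECTED_FEATURES]]
    df = pd.DataFrame(row, columns=EXPECTED_FEATURES)

    # Any transformations applied at training time (scaling, encoding, etc.)
    # would also be applied here.
    return df


Because predict() wraps its body in a try/except, a ValueError raised here is returned to the client as a JSON error response; you might prefer to map validation failures to a 400 status rather than the generic 500.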

            

Step 4: Create a Dummy Model File (for testing)

If you don't have a model file yet, you can create a simple one for testing purposes. Create a new Python file, e.g., create_dummy_model.py:


import joblib
from sklearn.linear_model import LinearRegression
import pandas as pd

# Create dummy data
data = {'feature1': [1, 2, 3, 4, 5], 'feature2': [2, 4, 5, 4, 5]}
target = [3, 6, 8, 8, 10]
df = pd.DataFrame(data)
y = pd.Series(target)

# Train a simple model
model = LinearRegression()
model.fit(df, y)

# Save the model
joblib.dump(model, 'model.pkl')
print("Dummy model.pkl created successfully.")
            

Run this script once: python create_dummy_model.py.
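
As a quick sanity check before wiring the model into Flask, you can load the saved file back and make a prediction directly. Since the dummy data satisfies target = feature1 + feature2 exactly, the prediction for inputs 6 and 7 should come out at roughly 13. A minimal sketch:


import joblib
import pandas as pd

# Load the model that create_dummy_model.py just saved.
model = joblib.load('model.pkl')

# Predict on one new row, using the same column names as in training.
sample = pd.DataFrame([{'feature1': 6, 'feature2': 7}])
print(model.predict(sample))  # expected: approximately [13.]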

Step 5: Run Your Flask Application

Now, start your Flask development server:


python app.py
            

Your API should now be running on http://localhost:8000/.

Step 6: Test Your API

You can use tools like curl or Postman to send POST requests to your API.

Using curl in your terminal:


curl -X POST -H "Content-Type: application/json" -d '{"feature1": 6, "feature2": 7}' http://localhost:8000/predict
            

The expected output is a JSON object containing the prediction. The dummy model fits its training data exactly (the target is simply feature1 + feature2), so for this input the prediction is approximately 13; the exact floating-point value may vary slightly:


{"prediction": [13.0]}
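
If you prefer testing from Python rather than the command line, the same request can be sent with the requests library (a separate install: pip install requests). A minimal sketch:


import requests

# Send the same payload as the curl example to the running API.
response = requests.post(
    'http://localhost:8000/predict',
    json={'feature1': 6, 'feature2': 7},
)

print(response.status_code)  # 200 on success
print(response.json())       # e.g. {'prediction': [13.0]}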
            

Deployment Considerations

While the above steps get your model running locally, for production deployment you'll want to consider:

- Serving the app with a production-ready WSGI server such as Gunicorn or uWSGI instead of Flask's built-in development server (for example, `gunicorn -w 4 -b 0.0.0.0:8000 app:app`)
- Containerizing the application (for example, with Docker) so the runtime environment is reproducible
- Validating incoming requests and returning clear error messages for malformed input
- Adding logging, monitoring, and a health-check endpoint (see the sketch below)
- Securing the endpoint with authentication, HTTPS, and rate limiting where appropriate
- Versioning your models so you can roll back or compare deployments
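
As one concrete example from the list above, a health-check endpoint gives load balancers and container orchestrators something simple to poll. Here is a minimal sketch you could add to app.py; it reuses the app, model, and jsonify names already defined there, and the /health route name is just a convention, not a Flask requirement.


@app.route('/health', methods=['GET'])
def health():
    # Report whether the process is running and whether the model loaded.
    if model is None:
        return jsonify({'status': 'model_not_loaded'}), 503
    return jsonify({'status': 'ok'}), 200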

Flask provides a solid foundation for building robust ML deployment APIs. With proper setup and deployment strategies, you can serve your models efficiently to users and applications.
