Logistic Regression with Jupyter Notebook Link to heading

4. Model Deployment Link to heading

Saving the Model Link to heading

Save the model using pickle or joblib.

import pickle

# Save the model to a file
with open('logistic_regression_model.pkl', 'wb') as f:
    pickle.dump(best_model, f)
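Alternatively, joblib is often preferred for scikit-learn models because it handles large NumPy arrays efficiently. A minimal sketch, assuming the same best_model object and a hypothetical file name:

import joblib

# Save the model to a file
joblib.dump(best_model, 'logistic_regression_model.joblib')

# Load it back later for inference
loaded_model = joblib.load('logistic_regression_model.joblib')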

Creating an API Link to heading

We will use Flask to set up a simple web service. Since the training data was augmented with interaction terms and transformed features, the test data must include these additional features, computed in exactly the same way. Here’s how you can modify the test input data accordingly:

Step-by-Step Solution Link to heading

  1. Ensure Consistent Feature Engineering: The test data must undergo the same feature engineering steps that were applied to the training data to maintain consistency.
  2. Add Interaction and Logarithmic Features: Add the interaction terms and logarithmic transformations to the test input data before passing it to the model.
from flask import Flask, request, jsonify
import pickle
import numpy as np

app = Flask(__name__)

# Load the trained model
with open('logistic_regression_model.pkl', 'rb') as f:
    model = pickle.load(f)

# Define the expected number of features
EXPECTED_FEATURE_COUNT = 7

@app.route('/predict', methods=['POST'])
def predict():
    # Extract features from the request
    data = request.json['features']
    
    # Check if the input has the expected number of original features (4 in this case)
    if len(data) != 4:
        return jsonify({'error': f'Expected 4 original features, but got {len(data)} features.'}), 400
    
    # Generate the additional features that were created during training
    sepal_length, sepal_width, petal_length, petal_width = data
    interaction_sepal = sepal_length * sepal_width
    interaction_petal = petal_length * petal_width
    log_sepal_length = np.log(sepal_length)
    
    # Combine all features into a single input array
    full_features = [sepal_length, sepal_width, petal_length, petal_width,
                     interaction_sepal, interaction_petal, log_sepal_length]
    
    # Ensure the total feature count matches what the model expects
    if len(full_features) != EXPECTED_FEATURE_COUNT:
        return jsonify({'error': f'Expected {EXPECTED_FEATURE_COUNT} features, but got {len(full_features)} features.'}), 400
    
    # Prediction
    prediction = model.predict([full_features])
    return jsonify({'prediction': int(prediction[0])})

if __name__ == '__main__':
    app.run(debug=True)

Explanation of the Code: Link to heading

  • Feature Extraction and Transformation: The code extracts the original features, then computes the interaction terms (sepal length * sepal width and petal length * petal width) and the logarithmic transformation (log sepal length).
  • Validation: It checks if the resulting feature array matches the expected number of features that the model was trained on (7 in this example).
  • Prediction: The adjusted feature array is used for prediction, ensuring consistency between training and inference.
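One way to guarantee this consistency is to put the feature engineering in a single helper function and reuse it for both training and inference. A minimal sketch, assuming the Iris-style inputs used above (the function name add_engineered_features is hypothetical, not part of the original code):

import numpy as np

def add_engineered_features(sepal_length, sepal_width, petal_length, petal_width):
    # Return the full 7-feature vector expected by the model
    return [
        sepal_length, sepal_width, petal_length, petal_width,
        sepal_length * sepal_width,   # sepal interaction term
        petal_length * petal_width,   # petal interaction term
        np.log(sepal_length),         # logarithmic transformation
    ]

Calling this function in both the training notebook and the /predict route keeps the two code paths from drifting apart.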

Deploying to a Server Link to heading

Deploying the API involves setting up a server to host your Flask application. This can be done on cloud services like AWS, Azure, or Google Cloud, or on a local server. The process generally includes configuring the server, installing necessary dependencies, and ensuring the API is accessible over the internet.
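As an example, a common pattern is to serve the Flask app with a production WSGI server such as Gunicorn rather than the built-in development server. The commands below are only a sketch and assume the application file is named app.py with the Flask object called app; adapt them to your hosting environment:

pip install gunicorn

# Serve the app on all interfaces, port 8000, with 4 worker processes
gunicorn --bind 0.0.0.0:8000 --workers 4 app:app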

To run and test the Flask application that loads a logistic regression model and provides a prediction API, follow these step-by-step instructions:

Step 1: Set Up Your Environment Link to heading

  1. Install Flask: Ensure that Flask and the other libraries used by the application (such as NumPy) are installed. You can install Flask using pip:
pip install flask

The pickle module is included in the Python standard library, so no separate installation is needed.

  2. Ensure the Model File Exists: Verify that the file logistic_regression_model.pkl (which contains your trained model) is in the same directory as your Flask application script. If the file is located elsewhere, update the path in the code accordingly. A quick check that the file loads is shown below.
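As an optional sanity check (not part of the application itself), you can confirm that the pickle file loads correctly before starting the server:

import pickle

# Load the saved model and print its type to confirm the file is readable
with open('logistic_regression_model.pkl', 'rb') as f:
    model = pickle.load(f)

print(type(model))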

Step 2: Run the Flask Application Link to heading

  1. Save the Code in a Python File: Save your Flask code in a file named, for example, app.py.
  2. Run the Application: Start the Flask server by running the following command in your terminal:
python app.py

When the server starts successfully, you should see output indicating that the server is running, typically something like:

* Running on http://127.0.0.1:5000 (Press CTRL+C to quit)

Step 3: Test the API Link to heading

You can test the prediction endpoint using curl from the terminal. Send a POST request with a JSON payload containing the feature values:

curl -X POST http://127.0.0.1:5000/predict -H "Content-Type: application/json" -d '{"features": [5.1, 3.5, 1.4, 0.2]}'

Replace [5.1, 3.5, 1.4, 0.2] with the actual feature values for which you want to get predictions.

The request should return the prediction in the following form:

{
  "prediction": 0
}
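
Alternatively, you can test the endpoint from Python using the third-party requests library (installed separately with pip install requests). This sketch assumes the server is running locally on the default port:

import requests

# Send only the four original feature values; the API derives the engineered features itself
payload = {'features': [5.1, 3.5, 1.4, 0.2]}
response = requests.post('http://127.0.0.1:5000/predict', json=payload)
print(response.json())  # e.g. {'prediction': 0}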