Pumpkin Spice Time Series Analysis

Throw on your comfiest lo-fi, grab an oversized sweater and your favorite hot beverage, and let’s Python.

Photo by Nathan Dumlao on Unsplash

It’s that time of year again in the northern hemisphere — a time for apples, pumpkins, and various configurations of cinnamon, nutmeg, ginger, allspice, and cloves. And since the grocery aisles are stocking up for Halloween, Thanksgiving, and the winter holidays, it’s a great time to dust off my statistical modeling skills. Hold onto your spiced lattes, and let’s do some function-oriented seasonal modeling. The complete code notebook may be found here.

Hypothesis:

Pumpkin spice’s popularity as a Google search term in the USA should have strong seasonality, because it’s tied to American fall holidays and seasonal dishes.

Null hypothesis:

Using last week’s or last year’s data as a direct prediction will be at least as predictive of this week’s popularity of the search term “pumpkin spice” as any model I build.

Data:

The last five years of data from Google Trends, pulled on October 7, 2023. [1]

Iterative Modeling Method:

  • Make a naive model where last week’s/last year’s data serves as this week’s prediction. Specifically, it’s not enough for my final model to be accurate or inaccurate in a vacuum. My final model must outperform using historical data as a direct prediction.
  • The train-test split will give me two sets of data: one for the algorithm to learn from, and the other for checking how well my algorithm performed.
  • Seasonal decomposition will give me a rough idea of how predictable my data is by attempting to separate the yearly overall trend from the seasonal patterns and the noise. A smaller scale of noise implies that more of the data can be captured by an algorithm.
  • A series of statistical tests will determine whether the data is stationary. If the data is not stationary, I’ll have to take a first difference (run a time-delta function where each time interval’s value becomes only the difference from the previous interval’s value), which forces the data to become stationary. See the sketch after this list.
  • Make some SARIMA models, using inferences from autocorrelation plots for the moving-average term and from partial autocorrelation plots for the autoregressive term. SARIMA is a go-to for time series modeling, and I’ll try ACF and PACF inference before a brute-force approach with Auto ARIMA.
  • Try Auto ARIMA, which iterates through many terms and chooses the best combination. I want to experiment to learn whether the parameters it gives me for a SARIMA model yield a better-performing model.
  • Try ETS models, using inference from the seasonal decomposition as to whether the seasonality is additive or multiplicative over time. ETS models focus more heavily on seasonality and overall trend than SARIMA-family models do, which should give me an edge in capturing the relationship pumpkin spice has to time.
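
As a minimal sketch of the stationarity step, the ADF test and first difference might look like the following. Here df["pumpkin_spice"] is an assumed name for the weekly Google Trends series, not the notebook’s actual variable:

```python
from statsmodels.tsa.stattools import adfuller

# Augmented Dickey-Fuller test: a p-value below 0.05 lets us reject
# the null hypothesis that the series is non-stationary.
p_value = adfuller(df["pumpkin_spice"].dropna())[1]
print(f"ADF p-value: {p_value:.4f}")

# If the series is not stationary, take a first difference: each
# interval now holds only the change from the previous interval.
first_diff = df["pumpkin_spice"].diff().dropna()
```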

Performance plotting KPIs:

  • Use the MAPE score, since it’s an industry standard in many workplaces and people may already be used to it. It’s easy to understand.
  • Use the RMSE score, since it’s more useful here: it penalizes large misses and is expressed in the units of the original data.
  • Plot predictions against the test data and visually check for performance.
Eyeballing the Data:

Image by the author.

As we can see from the plot above, this data shows strong potential for seasonal modeling. There’s a clear spike in the second half of each year, with a taper and another spike before a drop back down to our baseline.

However, each year’s main spike is bigger than the last, except in 2021, which makes sense given the pandemic, when folks may not have had celebrating the season on their minds.

Imports:

Note: These imports appear differently in the notebook itself, since in the notebook I rely on seasonal_mod.py, which has many of my imports baked in.

Image by the author.

These are the libraries I used to make the code notebook. I went with statsmodels instead of scikit-learn for its time series packages; I also like statsmodels better for most linear regression problems.
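
The imports appear as a screenshot in the notebook; a representative set, based on the libraries this walkthrough actually exercises, might look like this (the exact list in seasonal_mod.py may differ):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Time series tooling from statsmodels.
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Brute-force order search.
from pmdarima import auto_arima
```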

Function-Based Approach to the Code:

I don’t know about you, but I don’t want to write several lines of code every time I make a new model, and then more code to validate it. So instead I made some functions to keep my code DRY and to stop myself from making errors.

Image by the author.

These three little functions work together so that I only have to run metrics_graph() with y_true and y_preds as the input, and it gives me a blue line of true data and a red line of predicted data, along with the MAPE and RMSE. That saves me time and hassle.
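
The functions themselves appear as a screenshot in the notebook; below is a minimal sketch of how three such helpers could fit together. metrics_graph() and its y_true/y_preds arguments come from the article, while mape_score() and rmse_score() are hypothetical names standing in for the notebook’s actual metric helpers, and a pandas Series input is assumed:

```python
import numpy as np
import matplotlib.pyplot as plt

def mape_score(y_true, y_preds):
    """Mean absolute percentage error, expressed as a percentage."""
    y_true, y_preds = np.asarray(y_true, dtype=float), np.asarray(y_preds, dtype=float)
    return np.mean(np.abs((y_true - y_preds) / y_true)) * 100

def rmse_score(y_true, y_preds):
    """Root mean squared error, in the units of the original series."""
    y_true, y_preds = np.asarray(y_true, dtype=float), np.asarray(y_preds, dtype=float)
    return np.sqrt(np.mean((y_true - y_preds) ** 2))

def metrics_graph(y_true, y_preds, title="Predictions vs. actuals"):
    """Plot actuals (blue) against predictions (red) and print both KPIs."""
    plt.figure(figsize=(10, 4))
    plt.plot(y_true.index, y_true, color="blue", label="actual")
    plt.plot(y_true.index, y_preds, color="red", label="predicted")
    plt.title(title)
    plt.legend()
    plt.show()
    print(f"MAPE: {mape_score(y_true, y_preds):.2f}  RMSE: {rmse_score(y_true, y_preds):.2f}")
```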

Using Last Year’s Data as a Benchmark for Success:

My experience in retail management informed my decision to try last week’s data and last year’s data as direct predictions for this year’s data. In retail, we often used last season’s (one unit of time ago’s) data as a direct prediction, for example to ensure enough inventory for Black Friday. Last week’s data didn’t perform as well as last year’s data.
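
As a sketch, the two naive benchmarks can be built with a simple shift; df["pumpkin_spice"] is an assumed name for the weekly Trends series, and 52 rows approximate one year of weekly data:

```python
# Shift the series so the past becomes the "prediction".
weekly_naive = df["pumpkin_spice"].shift(1)    # last week predicts this week
yearly_naive = df["pumpkin_spice"].shift(52)   # same week last year predicts this week

# Score the yearly benchmark, dropping the rows lost to the shift.
mask = yearly_naive.notna()
metrics_graph(df["pumpkin_spice"][mask], yearly_naive[mask])
```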

Image by the author.

Last week’s data as a prediction of this week’s data showed a MAPE score of just over 18, with an RMSE of about 11. By comparison, last year’s data as a direct prediction of this year’s data showed a MAPE score of nearly 12, with an RMSE of about 7.

Image by the author.

Therefore I chose to compare all the statistical models I built against a naive model using last year’s data. This model caught the timing of the spikes and dips more accurately than our naive weekly model; however, I still thought I could do better. The next step in modeling was a seasonal decomposition.

The following function helped me run my seasonal decomposition, and I’ll be keeping it as reusable code for all future modeling.

Image by the author.
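
The function itself is shown as a screenshot in the notebook; a minimal reusable version, assuming weekly data with a yearly period of 52, could look like this:

```python
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

def plot_decomposition(series, model="additive", period=52):
    """Run a seasonal decomposition and plot trend, seasonality, and residuals."""
    result = seasonal_decompose(series, model=model, period=period)
    result.plot()
    plt.show()
    return result
```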

Below is how I used that seasonal decomposition.

Image by the author.

The additive model had a recurring yearly pattern in the residuals, evidence that an additive model wasn’t able to completely decompose all the recurring patterns. That was reason to try a multiplicative model for the yearly spikes.

Image by the author.

The residuals in the multiplicative decomposition were far more promising. They were much more random and on a much smaller scale, indicating that a multiplicative model would capture the data best. The residuals being so small, on a scale from about -1 to 1.5, meant there was a lot of promise in modeling.

But now I wanted a function for running SARIMA models specifically, inputting only the order. Because the seasonal decomposition favored a multiplicative model over an additive one, I also wanted to experiment with running “c”, “t”, and “ct” versions of the SARIMA model with those orders. Using “c”, “t”, and “ct” in the trend= parameter, I was able to add a constant, a linear trend, or both to my SARIMA model.

Image by the author.
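
A sketch of such a helper is below. It assumes train and test come from the train-test split mentioned earlier, reuses metrics_graph() from above, and the order values in the usage line are placeholders rather than the notebook’s tuned parameters:

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_sarima(train, test, order, seasonal_order, trend=None):
    """Fit a SARIMA model on train, forecast over test, and report the KPIs."""
    model = SARIMAX(train, order=order, seasonal_order=seasonal_order, trend=trend)
    fitted = model.fit(disp=False)
    preds = fitted.forecast(steps=len(test))
    metrics_graph(test, preds)
    return fitted

# trend="c", "t", or "ct" adds a constant, a linear trend, or both.
sarima_ct = fit_sarima(train, test, order=(1, 1, 1),
                       seasonal_order=(1, 1, 1, 52), trend="ct")
```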

I’ll skip describing the part where I examined the ACF and PACF plots, and the part where I tried pmdarima’s auto_arima to find the best terms to use in the SARIMA models. If you’re interested in those details, please see my full code notebook.
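For reference, a minimal auto_arima call of that kind, with m=52 encoding the yearly season in weekly data, looks roughly like this:

```python
from pmdarima import auto_arima

# Stepwise search over SARIMA orders; trace=True prints each candidate.
best_fit = auto_arima(train, seasonal=True, m=52, stepwise=True, trace=True)
print(best_fit.summary())
```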

My best SARIMA model:

Image by the author.

So my best SARIMA model had a higher MAPE score than my naive model, nearly 29 versus nearly 12, but a lower RMSE by about a unit, nearly 6 versus the naive model’s nearly 7. My biggest problem with this model is that it badly underpredicted the 2023 spike; there’s a fair amount of area between the red and blue lines from August to September of 2023. There are reasons to rate it better or worse than my yearly naive model, depending on your opinions about RMSE vs. MAPE. However, I wasn’t done yet. My final model was decisively better than my yearly naive model.

Final Model:

I used an ETS (exponential smoothing) model for my final model, which allowed me to explicitly set the seasonal parameter to use a multiplicative approach.
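
A minimal sketch with statsmodels’ Holt-Winters implementation is below. Only the multiplicative seasonal setting is dictated by the article; the additive trend term is my assumption, and train/test are the assumed split names from earlier:

```python
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# ETS with multiplicative seasonality, per the seasonal decomposition.
ets_model = ExponentialSmoothing(
    train,
    trend="add",          # assumed; not specified in the article
    seasonal="mul",       # multiplicative seasonality needs strictly positive values
    seasonal_periods=52,
).fit()

ets_preds = ets_model.forecast(steps=len(test))
metrics_graph(test, ets_preds)
```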

Image by the author.

Now you may be thinking, “But this model has a higher MAPE score than the yearly naive model.” And you’d be correct, by about 0.3%. However, I think that’s a more than fair trade, considering that I now have an RMSE of about 4.5 instead of 7. While this model struggles a bit more in December of 2022 than my best SARIMA model did, it’s off by a smaller area on that spike than on the larger spike in fall of 2023, which I care more about. You can find that model here.

Further Validation:

I’ll wait until 10/7/2024, do another data pull, and see how the model did against last year’s data.

Conclusion:

To sum up, I was able to reject the null hypothesis: my final model outperformed a naive yearly model. I’ve shown that pumpkin spice popularity on Google is highly seasonal and can be predicted. Between naive, SARIMA, and ETS models, ETS was best able to capture the relationship between time and pumpkin spice popularity. The multiplicative relationship of pumpkin spice to time implies that pumpkin spice’s popularity depends on at least one independent variable besides time in the expression time * unknown_independent_var = pumpkin_spice_popularity.

What I Learned and Future Work:

My next step is to use some version of Meta’s Graph API to look for “pumpkin spice” being used in business articles. I wonder how correlated that data will be with my Google Trends data. I also learned that when the seasonal decomposition points toward a multiplicative model, I’ll reach for an ETS model much sooner in my process.

Additionally, I’m interested in automating much of this process. Ideally, I’d like to build a Python module where the input is a CSV straight from Google Trends and the output is a usable model, with documentation good enough that a nontechnical user could make and test their own predictive models. In the eventuality that a user picks data that is hard to predict (i.e., where a naive or random walk model would fit better), I hope to build the module to explain that to the user. I could then collect data from an app using that module to showcase findings of seasonality across a variety of untested data.

Look out for that app by pumpkin spice season next year!

[1] Google Trends (https://www.google.com/trends), data pulled October 7, 2023.
