Beyond the Event Horizon: Multivariate Time Series with Qlik Predict
- Igor Alcantara
- 6 days ago

Imagine your data as a cluster of clocks orbiting a massive gravitational body: sales flowing by the minute, inventory pulsing in discrete beats, marketing spend stretching and contracting like spacetime around the edge of a black hole. On their own, each clock tells a fragment of the story; together, their shifting rhythms bend forecasts the way relativity bends light, revealing hidden paths through uncertainty. Multivariate time-series forecasting captures that curvature, mapping how one variable’s acceleration drags the rest along. Qlik Predict is about to put that power on your dashboard, no PhD in astrophysics required. No need to hire Christopher Nolan.
Yes, you read that right: Time Series is coming to Qlik Predict, previously known as Qlik AutoML, and I will tell you why this is huge. This is the 5th article in my series about the theory behind Qlik Predict. I started by explaining Machine Learning predictions through SHAP statistics, then I wrote about Qlik Predict preprocessing tasks. In my last article of 2024, I explored how model performance is calculated. Two months ago, at the dawn of the release of Data Drifting, I wrote the 4th article in the series, about that very topic. Now, it's time to explore the world of Time Series and, as usual, I will drop a few pop culture references. In this case, since we are talking about time, I picked Interstellar, Nolan's masterpiece that combines science and art in a very unique way. Alright, alright, alright, buckle up, we're about to lift off.
The Foundation: Understanding Time Series
At its core, time series data represents observations collected sequentially over time, where the temporal ordering carries critical information. Unlike static datasets, where you might analyze customer demographics at a single point in time, time series data captures the evolution and change of phenomena as they unfold. Think of it as creating a temporal map of reality: each data point serves as a coordinate not just in space, but in time itself.
Time series data surrounds us in every aspect of modern life. Financial markets generate millions of price points every second, weather stations continuously monitor atmospheric conditions, healthcare devices track patient vitals in real-time, and manufacturing sensors measure everything from temperature to vibration patterns. The key characteristic that distinguishes time series data from other forms of information is its inherent temporal dependency. The key thing to understand here is that (and I will make it bold to highlight it even more) **the value at any given moment is often influenced by what came before it.**
The mathematical foundation of time series analysis rests on the principle that observations close together in time tend to be more closely related than those further apart. This temporal correlation, known as autocorrelation, forms the backbone of time series forecasting methods. When we observe patterns in how variables change over time, we can often project these patterns forward to anticipate future values.
Does it all sound too confusing? Let's summarize it. At its simplest, a time series is a sequence of observations ordered in equally (or near-equally) spaced intervals: daily closing prices, hourly call-center volumes, minute-by-minute sensor temperatures. Time adds structure: yesterday influences today, seasonality repeats, shocks reverberate. That simple. The key properties of a Time Series are:
Order matters – Yesterday influences today, today influences tomorrow.
Temporal patterns – Seasonality, trends, cycles, and sudden shocks.
Autocorrelation – Observations often resemble their recent past.
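Autocorrelation, the third property above, is easy to compute yourself. Here is a minimal sketch using only the standard library; the daily sales numbers are invented purely to show a strong weekly rhythm.

```python
def autocorr(series, lag):
    """Pearson correlation between a series and itself shifted by `lag`."""
    n = len(series) - lag
    x, y = series[:n], series[lag:]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# A hypothetical series with a perfect weekly rhythm:
# at lag 7 the series lines up with itself exactly.
daily_sales = [10, 12, 11, 13, 15, 20, 22] * 8
print(autocorr(daily_sales, 7))   # 1.0 — observations resemble last week
print(autocorr(daily_sales, 1))   # lower — adjacent days differ more
```

Real data is noisier, of course, but a high autocorrelation at some lag is exactly the signal forecasting models exploit.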
The Tesseract of Machine Learning
If you watched Interstellar, you remember the black hole at the center of its unique planetary system: Gargantua. Beyond the event horizon lives a four-dimensional hypercube: a tesseract (spoiler alert). Inside it, Cooper discovers he can slide along one slender axis of Murph’s bedroom timeline, or fan out to view the infinite lattice of moments where gravity, emotion, and chance intersect. That contrast is the perfect shorthand for time-series analysis: follow one solitary strand through time and you have a univariate series, useful but narrow; step back to observe several intertwined strands and you enter the realm of multivariate series, where temperature, demand, and marketing spend coexist like shelves in Murph’s library, each influencing the others and revealing patterns a lone timeline could never expose.
Traditional forecasting treats each series alone (univariate). That works when one variable tells most of the story. Think daily sales for a single product with steady marketing and price. Multivariate simply means “many variables.” A multivariate time series (MTS) captures two or more variables recorded over the same timeline. Instead of a single column of values, your dataset becomes a matrix where each column is its own series: temperature, humidity, and energy demand logged every 15 minutes, for example. The power of MTS lies in its ability to learn the relationships (lags, leads, co-movements) among those columns. The next table summarizes some of these concepts. You can skip it in case you do not wish to get too technical.
| Concept | Why it matters in MTS |
| --- | --- |
| Lag | How past values of one variable predict future values of itself or others (e.g., yesterday’s ad spend affecting tomorrow’s web traffic). |
| Seasonality | Repeating patterns that multiple variables may share (holiday effects on both store visits and returns). |
| Cointegration | Different series drifting yet tied together by a long-run equilibrium (exchange rates of closely linked economies). |
| Granger causality | Statistical tests indicating whether one series has predictive content for another. |
| Regime shifts | Structural breaks where relationships change (a new pricing policy, a supply-chain disruption). |
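To make the lag concept concrete, here is a small sketch of how two aligned series become a supervised training table with a lag feature. The ad-spend and web-traffic numbers are made up for illustration; real pipelines would do the same alignment at scale.

```python
def make_lagged_rows(target, driver, lag):
    """Pair each target value with the driver value from `lag` steps earlier."""
    return [
        {"driver_lag": driver[t - lag], "target": target[t]}
        for t in range(lag, len(target))
    ]

# Hypothetical daily data: does yesterday's ad spend predict today's traffic?
web_traffic = [100, 120, 180, 160, 210, 260]
ad_spend    = [ 10,  30,  20,  40,  60,  50]

rows = make_lagged_rows(web_traffic, ad_spend, lag=1)
print(rows[0])  # {'driver_lag': 10, 'target': 120}
```

Each row now lets a model learn the lagged relationship directly; stacking several lags and several drivers produces the matrix an MTS model trains on.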
Beyond the Event Horizon
Multivariate time series analysis shines in capturing the complex interdependencies that govern real-world systems. In financial markets, stock prices don't move in isolation; they respond to interest rates, trading volumes, market sentiment, economic indicators, and countless other factors. Traditional univariate models might miss these crucial relationships, potentially overlooking signals that could improve forecasting accuracy.
The mathematical representation of multivariate time series involves vector-valued observations at each time point, where each component of the vector represents a different variable. This creates a matrix structure where rows represent time periods and columns represent different variables, allowing analysts to examine both the temporal evolution of individual variables and their cross-correlations.
The power of multivariate analysis becomes particularly evident in scenarios where exogenous variables influence endogenous ones. Weather conditions serve as exogenous variables that significantly impact energy demand patterns, yet this relationship might remain hidden in a univariate analysis of energy consumption alone. When temperature, humidity, wind patterns, and seasonal factors are incorporated into a multivariate model, energy companies can achieve more accurate demand forecasts and optimize their distribution strategies.
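A quick way to see such a hidden exogenous relationship is to check the cross-correlation between the two series at several lags. This is a hedged sketch with invented numbers, where demand is constructed to track the previous day's temperature:

```python
def corr(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

temperature = [18, 22, 30, 33, 27, 21, 19, 25, 31, 29]
# Invented demand: a base load plus twice yesterday's temperature.
demand = [50 + 2 * t for t in [20] + temperature[:-1]]

for lag in range(3):
    n = len(demand) - lag
    print(lag, round(corr(temperature[:n], demand[lag:]), 2))
```

In this toy setup the correlation peaks at lag 1, which is exactly the kind of lead-lag signal a multivariate model can exploit but a univariate model of demand alone would never see.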
The Deep Learning Revolution in Time Series Forecasting
Recent advances in deep learning have revolutionized multivariate time series forecasting by enabling models to automatically discover complex, nonlinear relationships between variables. Neural networks can identify subtle patterns and interactions that traditional statistical methods might miss, particularly in high-dimensional datasets where dozens or hundreds of variables interact simultaneously.
Deep-learning models have pushed time-series forecasting far beyond the straight lines of classic statistics. Convolutional networks scan temporal data the way they scan an image, but with “causal” filters that only look backward in time, which is ideal for spotting local ripples like the shallow waves on the planet near the black hole (another Interstellar reference). Long Short-Term Memory (LSTM) networks add a memory unit, so they can keep track of long-delay effects the way gravity keeps Cooper glued to his daughter across decades. Transformers go further, using attention to decide which past moments or which variables really matter right now, a bit like relativity’s idea that time stretches differently depending on where you’re standing. Finally, foundation models train on oceans of assorted data, giving you a wormhole-style shortcut: they arrive knowing enough patterns that you can forecast accurately with less historical data of your own.
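The "causal filter" idea mentioned above is simple to demonstrate: a convolution that only looks backward in time, so the output at step t never uses values after t. This is an illustrative sketch with made-up weights, not a trained model:

```python
def causal_conv(series, weights):
    """1-D convolution with left padding only: output[t] depends on
    series[t - k + 1 .. t], where k = len(weights). Nothing from the
    future leaks into the past."""
    k = len(weights)
    padded = [series[0]] * (k - 1) + list(series)  # repeat-edge left padding
    return [
        sum(w * padded[t + j] for j, w in enumerate(weights))
        for t in range(len(series))
    ]

signal = [0, 0, 0, 10, 0, 0, 0]              # a single spike at t = 3
smoothed = causal_conv(signal, [0.5, 0.3, 0.2])
print(smoothed)                               # the spike only affects t >= 3
```

Stacking many such filters with learned weights is, loosely, what a causal convolutional network does; LSTMs and transformers replace the fixed window with memory and attention, respectively.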
Technical Implementation and Model Selection
The success of multivariate time series forecasting depends heavily on selecting appropriate modeling techniques for specific applications. As I write this article, MTS is not yet available in Qlik Predict; it is predicted (pun absolutely intended) to arrive sometime in Q3 2025. The following table lists the best candidate algorithms Qlik Predict may use for its Time Series forecasting. It is based on what has been released so far, and also on the voices in my head, which could clearly just be myself from the future behind a bookshelf. The list does not include classics like ARIMA and ARIMAX, because those are univariate or only semi-multivariate; instead it covers some of the most modern and effective approaches to MTS, letting you pick the right balance between interpretability, horizon length, and speed.
| Technique | Model family | Multivariate support | What it does best | Typical limits / watch-outs |
| --- | --- | --- | --- | --- |
| DeepAR | Recurrent neural network (auto-regressive LSTM) | Learns a single global model across many related series; accepts multiple covariates and can output several targets | Probabilistic forecasts, cold-start handling, strong on medium-range horizons where patterns repeat across items | Needs plenty of historical data; RNNs can struggle with very long sequences and may lag newer attention-based models on extreme horizons |
| TiDE (Time-series Dense Encoder) | Hybrid encoder–decoder with dense (MLP + gating) blocks | Fully multivariate; designed for long-horizon, many-variable settings | Memory-efficient training, quick inference, excels on long-range forecasts without the quadratic cost of full attention | Still cutting-edge, with fewer off-the-shelf best-practice guides; results can be sensitive to architecture and learning-rate choices |
| TSMixer (Time-series Mixer) | MLP-Mixer variant (token & channel mixing through stacked feed-forward layers) | Fully multivariate; token mixing captures cross-time, channel mixing captures cross-variable effects | Parameter-light, fast on GPUs/CPUs, performs well when you need dozens of variables and very long forecast windows | May miss very fine-grained local patterns; less intuitive than recurrent models; hyper-parameter tuning critical |
| TFT (Temporal Fusion Transformer) | Attention-based transformer with gating & variable-selection layers | Fully multivariate; supports static, time-varying, and “known-future” covariates | Multi-horizon forecasting with built-in explainability (variable importance, attention heat-maps); robust to mixed-frequency inputs | Heavy compute footprint; data-hungry; complex architecture can over-fit small datasets and requires careful regularisation |
Challenges and Considerations
Despite their power, multivariate time series models face several challenges that analysts, scientists, and developers must carefully consider. The curse of dimensionality (which will be a future article) becomes pronounced as the number of variables increases, potentially leading to overfitting when insufficient historical data is available. Careful feature selection and regularization techniques help mitigate these risks.
The temporal alignment of variables presents another challenge, particularly when different variables are measured at different frequencies or with different delays. Careful consideration of lead-lag relationships and appropriate resampling techniques ensure that models capture true relationships rather than artifacts of data collection processes.
Model validation in multivariate time series requires specialized techniques that respect the temporal structure of the data. Traditional cross-validation approaches that randomly split data can lead to data leakage, where future information inadvertently influences past predictions. Time-series-specific validation techniques, such as rolling window validation, provide more realistic performance estimates.
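The rolling-window idea is straightforward to sketch: each fold trains on everything up to a cutoff and tests on the next block, so no future information ever leaks into training. This is a minimal illustration, not Qlik Predict's actual validation implementation:

```python
def rolling_splits(n, initial_train, test_size):
    """Yield (train_indices, test_indices) pairs in temporal order.
    The training window expands; the test window always lies after it."""
    cutoff = initial_train
    while cutoff + test_size <= n:
        yield list(range(cutoff)), list(range(cutoff, cutoff + test_size))
        cutoff += test_size

# Ten time steps, starting with four training points, testing two at a time.
for train, test in rolling_splits(n=10, initial_train=4, test_size=2):
    print(f"train=0..{train[-1]}  test={test}")
```

Because every test index comes strictly after every training index, the performance estimate reflects what the model would have done if deployed at that point in time.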
Luckily, based on what I know about Qlik Predict and the great team behind it, most if not all of these challenges will be addressed, and you will be able to just relax and enjoy the view.
The Future of Temporal Intelligence
Time and space are interconnected dimensions that can be navigated and understood. Using this analogy, the future of analytics lies in recognizing the multidimensional nature of data and its temporal evolution. Multivariate time series analysis represents a fundamental shift from viewing variables in isolation to understanding them as part of complex, interconnected systems that evolve together through time.
The addition of MTS to Qlik Predict brings the platform to a whole new level. The possibilities are now much greater. Recently, I needed to create a Qlik Predict model to predict how many units of each product would be sold on a daily basis. Without Time Series, the workaround I found was to create multiple regression models: one for tomorrow, another for 2 days from now, and so on. Not only is that far from ideal, it also consumes a lot of the Deployment capacity. With Qlik Time Series, I can get a much faster, more robust, and more accurate solution.
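For the curious, the workaround described above amounts to "direct" multi-step forecasting: building a separate regression target for each horizon. This sketch, with invented daily sales numbers, shows the target construction that native time series support makes unnecessary:

```python
def direct_targets(series, horizons):
    """For each time t, pair t with the value h steps ahead,
    one target column per forecast horizon."""
    max_h = max(horizons)
    return [
        {"t": t, **{f"y_plus_{h}": series[t + h] for h in horizons}}
        for t in range(len(series) - max_h)
    ]

daily_units = [12, 14, 13, 17, 19, 18, 21, 24]
rows = direct_targets(daily_units, horizons=[1, 2, 3])
print(rows[0])  # {'t': 0, 'y_plus_1': 14, 'y_plus_2': 13, 'y_plus_3': 17}
```

Each target column would then need its own deployed model, which is exactly why the approach is slow and capacity-hungry compared to a single multivariate time series model.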
The democratization of these capabilities through Qlik Predict signals a transformation in how organizations approach forecasting and decision-making. When advanced multivariate analysis becomes accessible to business users rather than being confined to specialized data science teams, it enables faster iteration, domain-specific insights, and more agile responses to changing conditions.
As we stand on the brink of this new era in predictive analytics, the ability to understand and forecast the complex interplay of variables across time will become a critical competitive advantage. Organizations that embrace multivariate time series analysis will be better positioned to anticipate market changes, optimize operations, and respond proactively to emerging challenges and opportunities.
In this new paradigm, time truly becomes a dimension we can navigate: not physically, but analytically, using the power of multivariate time series analysis to peek into possible futures and make better decisions in the present. The future of predictive intelligence is multivariate, and it's arriving sooner than you might think.