A classic prediction problem in finance is to predict the next returns (i.e. relative price variations) of a stock market.
That is, given a stock market of $N$ stocks having returns $R_t \in \mathbb{R}^N$ at time $t$,
the goal is to design at each time $t$ a vector $S_{t+1} \in \mathbb{R}^N$ from the information available up to time $t$
such that the prediction overlap $\langle S_{t+1}, R_{t+1} \rangle$ is quite often positive.
To be fair, this is not an easy task.
In this challenge, we attack this problem armed with a linear factor model where one learns the factors over an exotic non-linear parameter space.
More precisely, the simplest estimators being the linear ones, a typical move is to consider a parametric model of the form
$$S_{t+1} := \sum_{\ell=1}^{F} \beta_\ell \, F_{t,\ell},$$
where the vectors $F_{t,\ell} \in \mathbb{R}^N$ are explanatory factors (a.k.a. features), usually designed from financial expertise,
and $\beta_1, \ldots, \beta_F \in \mathbb{R}$ are model parameters that can be fitted on a training data set.
But how does one design the factors $F_{t,\ell}$?
Factors that are "well known" in the trading world include the 5-day (normalized) mean returns $R_t^{(5)}$
or the Momentum $M_t := R_{t-20}^{(230)}$, where $R_t^{(m)} := m^{-1} \sum_{k=1}^{m} R_{t+1-k}$.
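To make these two textbook factors concrete, here is a minimal sketch in Python (our own illustration, not challenge code), assuming the returns are stored in a NumPy array with one row per stock and one column per day, as in the training dataframe described below; the function names are ours:

```python
import numpy as np

def mean_returns(R, t, m):
    """R_t^(m): average of the m most recent returns R_{t+1-k}, k = 1, ..., m.

    R is an (n_stocks, n_days) array of daily returns; t is a day index with t >= m - 1.
    """
    window = R[:, t - m + 1 : t + 1]   # columns t+1-m, ..., t (the m most recent days)
    return window.mean(axis=1)

def momentum(R, t):
    """M_t := R_{t-20}^(230), the 230-day mean returns lagged by 20 days."""
    return mean_returns(R, t - 20, 230)

# Example on synthetic data: the 5-day factor and the Momentum factor on day t = 300.
rng = np.random.default_rng(0)
R = 0.01 * rng.normal(size=(50, 754))   # stand-in for the real daily returns
f_5day = mean_returns(R, 300, 5)        # shape (50,)
f_mom = momentum(R, 300)                # shape (50,)
```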
But if you know no finance and have developed enough taste for mathematical elegance, you may aim at learning the factors themselves within the simplest class of factors,
namely linear functions of the past returns:
$$F_{t,\ell} := \sum_{k=1}^{D} A_{k\ell} \, R_{t+1-k}$$
for some vectors $A_\ell := (A_{k\ell})_{1 \le k \le D} \in \mathbb{R}^D$ and a fixed time depth parameter $D$.
Well, we need to add a condition to create enough independence between the factors, since otherwise they may be redundant.
One way to do this is to assume that the vectors $A_\ell$ are orthonormal, $\langle A_k, A_\ell \rangle = \delta_{k\ell}$ for all $k, \ell$, which adds a non-linear constraint to the parameter space of our predictive model.
All in all, we thus have at hand a predictive parametric model with parameters:
a $D \times F$ matrix $A := [A_1, \ldots, A_F]$ with orthonormal columns,
a vector $\beta := (\beta_1, \ldots, \beta_F) \in \mathbb{R}^F$.
Note that this model contains, as submodels, the two-factor model using $R_t^{(5)}$ and $M_t$ defined above,
as well as the autoregressive model AR from time series analysis.
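Under the convention that $R_{t+1-k}$, $k = 1, \ldots, D$, are the $D$ most recent return vectors, a minimal sketch of the resulting predictor could look as follows (the function name and array layout are our own illustration, not challenge code):

```python
import numpy as np

def predict_next_returns(A, beta, R_past):
    """S_{t+1} = sum_l beta_l F_{t,l}, with F_{t,l} = sum_k A_{kl} R_{t+1-k}.

    A      : (D, F) matrix with orthonormal columns A_1, ..., A_F.
    beta   : (F,) vector of model parameters.
    R_past : (N, D) array whose k-th column (k = 1, ..., D) is R_{t+1-k},
             i.e. the most recent return vector comes first.
    Returns the (N,) prediction vector S_{t+1}.
    """
    factors = R_past @ A     # (N, F): column l holds the factor F_{t,l}
    return factors @ beta    # (N,): linear combination of the learned factors
```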
Challenge goals
The goal of this challenge is to design/learn factors for stock return prediction using the exotic parameter space introduced in the context section.
Participants will be able to use a three-year data history of 50 stocks from the same stock market (training data set) to provide the model parameters $(A, \beta)$ as outputs.
Then the predictive model associated with these parameters will be tested on its ability to predict the returns of 50 other stocks over the same three-year time period (testing data set).
We set $D = 250$ days for the time depth and $F = 10$ for the number of factors.
Metric.
More precisely, we assess the quality of the predictive model with parameters $(A, \beta)$ as follows. Let $\tilde R_t \in \mathbb{R}^{50}$ be the returns of the 50 stocks of the testing data set over the three-year period ($t = 0, \ldots, 753$)
and let $\tilde S_t = \tilde S_t(A, \beta)$ be the participants' predictor for $\tilde R_t$. The metric to maximize is defined by
$$\mathrm{Metric}(A, \beta) := \frac{1}{504} \sum_{t=250}^{753} \frac{\langle \tilde S_t, \tilde R_t \rangle}{\lVert \tilde S_t \rVert \, \lVert \tilde R_t \rVert}$$
if $\lvert \langle A_i, A_j \rangle - \delta_{ij} \rvert \le 10^{-6}$ for all $i, j$, and $\mathrm{Metric}(A, \beta) := -1$ otherwise.
By construction the metric takes its values in $[-1, 1]$ and equals $-1$ as soon as there exists a couple $(i, j)$ violating the orthonormality condition beyond this tolerance.
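For concreteness, here is a sketch of this metric in NumPy, under our reading of the formula above and our own indexing convention (the official implementation is the one in the challenge notebook):

```python
import numpy as np

def metric(A, beta, R_test, D=250, tol=1e-6):
    """Average cosine overlap between predictions and realized test returns,
    or -1 if the columns of A fail the orthonormality check."""
    F = A.shape[1]
    if np.max(np.abs(A.T @ A - np.eye(F))) > tol:
        return -1.0
    overlaps = []
    for t in range(D, R_test.shape[1]):          # t = 250, ..., 753 when T = 754
        R_past = R_test[:, t - D:t][:, ::-1]     # R_{t-1}, ..., R_{t-D}, most recent first
        S_t = (R_past @ A) @ beta                # prediction for day t
        R_t = R_test[:, t]
        denom = np.linalg.norm(S_t) * np.linalg.norm(R_t)
        overlaps.append(S_t @ R_t / denom if denom > 0 else 0.0)
    return float(np.mean(overlaps))
```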
Output structure. The output expected from the participants is a single vector in which the model parameters $A = [A_1, \ldots, A_{10}] \in \mathbb{R}^{250 \times 10}$ and $\beta \in \mathbb{R}^{10}$ are stacked, giving $250 \times 10 + 10 = 2510$ entries in total.
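The exact stacking order should be taken from the challenge notebook; a plausible layout (the ten columns of $A$ in order, followed by $\beta$) could be produced as follows:

```python
import numpy as np

def stack_parameters(A, beta):
    """Flatten (A, beta) into one submission vector of length 250*10 + 10 = 2510.

    Assumed layout (to be checked against the challenge notebook): the ten
    columns A_1, ..., A_10 stacked in order, followed by the ten entries of beta.
    """
    return np.concatenate([A.flatten(order="F"), beta])
```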
The training input given to the participants, $X_{\mathrm{train}}$, is a dataframe containing the (cleaned) daily returns of 50 stocks over a time period of 754 days (three years).
Each row represents a stock and each column refers to a day. $X_{\mathrm{train}}$ should be used to find the predictive model parameters $(A, \beta)$.
The returns to be predicted in the training data set are provided in $Y_{\mathrm{train}}$ for convenience, but they are also contained in $X_{\mathrm{train}}$.
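For instance, extracting from this dataframe the past-return window needed to predict a given day might look like the following sketch (the file name and loading step are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical loading step; the actual file names are given on the challenge page.
X_train = pd.read_csv("X_train.csv", index_col=0)
R = X_train.to_numpy()          # shape (50, 754): one row per stock, one column per day

D = 250
t = 400                         # any target day with t >= D
R_past = R[:, t - D:t][:, ::-1] # the window R_{t-1}, ..., R_{t-D}, most recent first
R_target = R[:, t]              # the return vector to predict on day t
```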
Benchmark description
A possible "brute force" procedure to tackle this problem is to generate orthonormal vectors A1โ,โฆ,A10โโR250 at random and then to fit ฮฒ on the training data set by using linear regression,
to repeat this operation many times, and finally to select the best result from these attempts.
More precisely, the QRT benchmark strategy to beat is (see the notebook in the supplementary material):
Repeat $N_{\mathrm{iter}} = 1000$ times the following:
Sample a $250 \times 10$ matrix $M$ with iid Gaussian $\mathcal{N}(0, 1)$ entries.
Apply the Gram-Schmidt algorithm to the columns of $M$ to obtain a matrix $A = [A_1, \ldots, A_{10}]$ with orthonormal columns (see the randomA function).
Use the columns of $A$ to build the factors, and then take the $\beta$ with minimal mean square error on the training data set (with fitBeta).
Compute the metric on the training data (metricTrain).
Return the model parameters $(A, \beta)$ that maximize this metric.
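A compact sketch of this benchmark loop is given below, assuming the training returns are already in a $(50, 754)$ NumPy array R; the functions randomA, fitBeta and metricTrain here are our own reimplementations of what the notebook's functions are described to do, not the notebook code itself:

```python
import numpy as np

D, F, N_ITER = 250, 10, 1000

def randomA(rng, d=D, f=F):
    """Orthonormalize the columns of a d x f Gaussian matrix (Gram-Schmidt)."""
    M = rng.normal(size=(d, f))
    A = np.empty_like(M)
    for l in range(f):
        v = M[:, l] - A[:, :l] @ (A[:, :l].T @ M[:, l])   # remove earlier components
        A[:, l] = v / np.linalg.norm(v)
    return A

def build_design(R, A):
    """Stack factor values and targets over all predictable days t = D, ..., T-1."""
    X, y = [], []
    for t in range(D, R.shape[1]):
        R_past = R[:, t - D:t][:, ::-1]   # R_{t-1}, ..., R_{t-D}, most recent first
        X.append(R_past @ A)              # (N, F) factor values for day t
        y.append(R[:, t])                 # (N,) realized returns on day t
    return np.vstack(X), np.concatenate(y)

def fitBeta(R, A):
    """Least-squares fit of beta on the training data."""
    X, y = build_design(R, A)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def metricTrain(R, A, beta):
    """Average cosine overlap between predictions and realized returns."""
    overlaps = []
    for t in range(D, R.shape[1]):
        S = (R[:, t - D:t][:, ::-1] @ A) @ beta
        denom = np.linalg.norm(S) * np.linalg.norm(R[:, t])
        overlaps.append(S @ R[:, t] / denom if denom > 0 else 0.0)
    return float(np.mean(overlaps))

def benchmark(R, n_iter=N_ITER, seed=0):
    """Keep the (A, beta) pair with the best training metric over n_iter random draws."""
    rng = np.random.default_rng(seed)
    best = (-np.inf, None, None)
    for _ in range(n_iter):
        A = randomA(rng)
        beta = fitBeta(R, A)
        score = metricTrain(R, A, beta)
        if score > best[0]:
            best = (score, A, beta)
    return best   # (best training metric, A, beta)
```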
Remark: The orthonormality condition for the vectors $A_1, \ldots, A_F$ reads $A^{T} A = I_F$ for the matrix $A := [A_1, \ldots, A_F]$. The space of matrices satisfying this condition is known as the Stiefel manifold, a generalization of the orthogonal group,
and one can show that the previous procedure generates a sample from the uniform distribution on this compact homogeneous space.
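The randomA routine above is described as using Gram-Schmidt; an equivalent way to draw such a uniform sample, not taken from the challenge material, is a reduced QR decomposition with a sign correction on the diagonal of $R$:

```python
import numpy as np

def sample_stiefel(d=250, f=10, rng=None):
    """Uniform (Haar) sample from the Stiefel manifold {A : A^T A = I_f}."""
    rng = np.random.default_rng() if rng is None else rng
    M = rng.normal(size=(d, f))
    Q, R = np.linalg.qr(M)          # reduced QR: Q is (d, f), R is (f, f)
    Q = Q * np.sign(np.diag(R))     # fix column signs so the law is exactly uniform
    return Q

A = sample_stiefel()
assert np.allclose(A.T @ A, np.eye(10), atol=1e-10)
```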
The challenge provider
Qube Research & Technologies Group is a quantitative and systematic investment manager employing around 300 people with offices in Hong Kong, London, Mumbai, Paris and Singapore. We are a technology-driven firm implementing a scientific approach to financial investment. QRT's market presence is global and expands across the largest liquid electronic venues. The combination of data, research, technology and trading expertise has shaped our DNA and is at the heart of our innovation and development dynamic. The firm acts as an investment manager managing open-ended funds used for the management of third-party capital.