# Challenge Data

### Deep Hedging of an Equinox, by Natixis

#### Dates

Started on Jan. 6, 2020

# Deep Hedging of an Equinox

In this project, we construct optimal portfolios that hedge an option, the Equinox, issued by the bank to an investment manager who wants to invest with a guarantee that his capital is maintained. The investor is willing to pay a premium, as small as possible, to benefit from this guarantee.

The Equinox is a long-term option that pays:

$payoff = 1_{S_R \geq B} \, G + 1_{S_R < B} \, (S_T - K)^+$

where

$S_t$ is the value of the underlying asset (individual stock, stock index, ...) at time $t$, $T$ is the maturity of the option, and $K$ is typically the forward price of the underlying asset. Usually the maturity of such an option is 10 years. For this project the maturity is reduced to 5 years to shorten the hedging horizon, and therefore to lower the complexity of the optimization.

Usually, such an option is expensive. The autocall feature lowers its price. At an intermediate date $R < T$, the bank recalls the option if and only if the value of the underlying asset is greater than a barrier $B$. If at the reset date $R$ the value of the underlying is lower than the barrier, then the Equinox becomes a standard European call option.

G is the rebate paid by the bank to the investor when the Equinox is recalled.

This barrier is usually close to 100% of the spot value of the underlying at the origination date. If the underlying price at the reset date is above this barrier, the investor receives a compensation G = 5% of the notional. Using a Black-Scholes model with a 20% volatility for the underlying, the Equinox is sold at 6.1% of the notional amount; an ordinary option would sell at 17%. This makes the Equinox interesting for an investor who wants to capture the long-term performance of an asset, especially one who thinks the market may experience a medium-term low.
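As a rough illustration of these magnitudes, the following Monte-Carlo sketch prices the Equinox and the corresponding vanilla call under Black-Scholes. It assumes zero rates (so the forward and the strike equal the spot, $S_0 = K = B = 1$) and a reset date of R = 1 year, which is an assumption since the text does not specify it; the resulting prices are therefore only in the ballpark of the figures quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

S0, sigma, T = 1.0, 0.20, 5.0
R = 1.0                       # reset date: assumed value, not given in the text
K, B, G = 1.0, 1.0, 0.05      # zero-rate assumption => forward = spot = 1
n_paths = 200_000

# Simulate S_R and S_T under Black-Scholes (risk-neutral, zero rates).
z1 = rng.standard_normal(n_paths)
z2 = rng.standard_normal(n_paths)
S_R = S0 * np.exp(-0.5 * sigma**2 * R + sigma * np.sqrt(R) * z1)
S_T = S_R * np.exp(-0.5 * sigma**2 * (T - R) + sigma * np.sqrt(T - R) * z2)

# Equinox payoff: rebate G if autocalled at R, else vanilla call payoff at T.
payoff = np.where(S_R >= B, G, np.maximum(S_T - K, 0.0))
equinox_price = payoff.mean()

# Plain European call of the same maturity and strike, for comparison.
vanilla_price = np.maximum(S_T - K, 0.0).mean()

print(f"Equinox: {equinox_price:.2%}, vanilla: {vanilla_price:.2%}")
```

The gap between the two prices comes from the autocall: on the paths where the underlying finishes above the barrier at the reset date, the bank pays only the small rebate G instead of keeping the full call alive.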

The Black-Scholes model (lognormality of the price process of the underlying) used to value the option is a very crude approximation, for the following reasons:

1. The Black-Scholes assumption itself is very crude. A more realistic model should include stochastic volatility with a stochastic correlation between the spot and the spot volatility.

2. The dynamic hedging argument that values the option as its replication cost is only valid for continuous rehedging. In reality, hedging is done at discrete times. For this project we assume that it is done weekly, to keep the computation feasible with limited computing resources.

3. Transaction costs are not taken into account in the martingale-based pricing model. In reality transaction costs can be expensive. Here, we assume that they are equal to 0.01 percent of the value of the transaction.

4. Vega hedging is needed with stochastic volatility models. It is enforced by the risk control of the bank and by the regulatory authorities.

The goal of dynamic hedging is to reshape the risk distribution in the martingale-based approach of derivative valuation. Without dynamic hedging the bank would have to post capital to absorb the risk associated with selling this option. Vega rehedging is assumed to happen only periodically.

Vega hedging is done by buying or selling the same European call option of maturity T and strike K.

For the sake of simplicity, hedging is done only at two dates: at the origination of the deal (time = 0) and at the reset date, R.

The vega trades are usually very expensive because of transaction costs, mainly the bid-offer spread. This is why vega hedging is done at discrete times.

It is important to understand that the call options used to vega hedge come with their own delta hedging. Every week the combined position (Equinox + vanilla option) is delta-rehedged. The rehedging generates transaction costs.

In this project, we assume that the transaction cost associated with the vanilla options is 2 percent of the value of the option.

## Hedging policy

In this project we assume that the risk control of the bank is setting limits for delta and vega exposures.

The delta limit is enforced weekly and the vega limit is enforced only at the origination of the deal and at the reset date.

For the sake of simplicity, we assume that the risk limit for vega is 0, while we consider 3 risk limits for delta: L = 10%, L = 20% and L = 50%.

Therefore the only limit to be enforced is:

$\left| \Delta_{equinox} + \Delta_{vanilla} \cdot n_{vegahedge} + H_{hedge} \right| < L$

In this project, the participants are given the Equinox delta, the vanilla delta and the vega hedge. The participants need to determine the number of units of the underlying, $H_{hedge}$, to be traded in order to maximize a utility function described below.
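Concretely, the limit above defines, at each date, an admissible interval for $H_{hedge}$ given the two deltas and the vega-hedge notional. A minimal sketch (the function name and the illustrative numbers are not from the challenge data) projects a candidate position onto that interval:

```python
import numpy as np

def clip_hedge(h_candidate, delta_equinox, delta_vanilla, n_vegahedge, limit):
    """Project a candidate underlying position H_hedge onto the set allowed
    by the risk limit |delta_equinox + delta_vanilla * n_vegahedge + H| <= limit.
    All inputs may be scalars or NumPy arrays of the same shape."""
    d = delta_equinox + delta_vanilla * n_vegahedge  # combined option delta
    return np.clip(h_candidate, -limit - d, limit - d)

# Illustrative numbers (not from the challenge data):
h = clip_hedge(h_candidate=0.0, delta_equinox=0.6,
               delta_vanilla=0.5, n_vegahedge=-0.8, limit=0.1)
# combined delta d = 0.6 - 0.4 = 0.2, so H must lie in [-0.3, -0.1]
print(h)  # -0.1: the candidate is pushed to the nearest admissible value
```

An optimizer can use such a projection step to keep every candidate strategy inside the risk limits while searching for the utility-maximizing hedge.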

The risk control of the bank uses a Black-Scholes type of model to generate the deltas and vegas. The proxy for the implied volatility of the pricing model is the instantaneous volatility generated by a market model. This market model assumes more realistic dynamics for the value of the underlying asset.

By providing all the necessary computations in the dataset, we spare the participants to this challenge any stochastic calculation.

## The Market Model

Trajectories are generated by a stochastic model. A realistic model should have its drift controlled so that the price is a martingale in the delta-neutral measure of the other assets. In the current market environment, this implies a negative drift in forward prices of 3% per year on average. This negative drift corresponds to the average dividend rate and the repo rate.

The volatility in this model is stochastic but its process does not need to be a martingale, because no tradable security pays it. It is assumed here to be a mean reverting process. Alternative models assume a volatility term structure.

We also assume that the instantaneous correlation between the underlying and the spot volatility is stochastic and follows a mean-reverting process.

Details about the model are given in the following document:

We assume that the initial value of the underlying is equal to 1. This facilitates the computations and makes life easier for the optimizers. We also provide initial values for the spot volatility and the instantaneous correlation. The negative value of the correlation is related to the frequent observation of an inverse relationship between the direction of the market and its volatility.
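To make the description above concrete, here is a minimal Euler-scheme sketch of such a market model: a spot with negative drift, a mean-reverting stochastic volatility correlated with the spot, and a mean-reverting stochastic correlation. All parameter values (mean-reversion speeds, long-run levels, vol-of-vol) are purely illustrative assumptions; the actual dynamics are those of the attached paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters only; the actual dynamics are in the attached paper.
n_steps, dt = 260, 5.0 / 260          # weekly steps over 5 years
mu = -0.03                            # average negative forward drift
kappa_v, vbar, xi = 2.0, 0.20, 0.30   # vol mean-reversion speed/level/vol-of-vol
kappa_r, rbar, eta = 1.0, -0.5, 0.20  # correlation mean reversion

S, v, rho = 1.0, 0.20, -0.5           # initial spot, vol, correlation
path = [S]
for _ in range(n_steps):
    z1, z2, z3 = rng.standard_normal(3)
    w_s = z1
    w_v = rho * z1 + np.sqrt(1 - rho**2) * z2   # correlate spot and vol shocks
    S *= np.exp((mu - 0.5 * v**2) * dt + v * np.sqrt(dt) * w_s)
    v += kappa_v * (vbar - v) * dt + xi * np.sqrt(dt) * w_v
    v = max(v, 1e-4)                             # keep volatility positive
    rho += kappa_r * (rbar - rho) * dt + eta * np.sqrt(dt) * z3
    rho = min(max(rho, -0.99), 0.99)             # keep correlation in (-1, 1)
    path.append(S)

print(len(path))  # 261 spot values, matching the weekly grid of the dataset
```

The negative initial correlation reproduces the stylized fact mentioned above: the market tends to move inversely to its volatility.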

## Theoretical considerations needed to solve the optimization described above

The project simulates the cost of hedging the Equinox option along each trajectory (this cost is also referred to as the replication cost of the Equinox):

$CR = \Gamma(S_1, S_2, \dots, S_n) - \sum_{k=1}^n \delta_{k-1} \, (S_k - S_{k-1}) + \sum_{k=1}^n TrCost(\delta_k - \delta_{k-1}) + VegaHdg$

We then obtain a distribution of the hedging cost. The purpose of the project is to derive the optimal hedging strategy that satisfies the risk limits and maximizes the following utility function:

$U(X) = -\frac{1}{\lambda} \log \, \mathbb{E}\left[\exp(-\lambda X)\right]$

where $X = - CR$

and
$\lambda$ relates to the trade-off between risk and return.

For the project,

$\lambda = 1$
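The two formulas above can be evaluated directly. The sketch below (helper names and test values are ours, not the challenge's) computes the replication cost of a delta strategy along one trajectory, omitting the vega-hedging cost term, and the exponential utility of the resulting P&L:

```python
import numpy as np

def replication_cost(payoff, deltas, spots, tr_rate=1e-4):
    """CR = payoff - hedging gains + transaction costs (vega hedge omitted).
    spots: array of length n+1 (S_0..S_n); deltas: length n, deltas[k] is the
    position held over [t_k, t_{k+1}]. tr_rate = 0.01% of traded value."""
    gains = np.sum(deltas * np.diff(spots))
    # trades at t_0..t_n: enter, rebalance, and finally unwind the position
    trades = np.diff(np.concatenate(([0.0], deltas, [0.0])))
    tr_cost = tr_rate * np.sum(np.abs(trades) * spots)
    return payoff - gains + tr_cost

def utility(x, lam=1.0):
    """Exponential utility U(X) = -(1/lam) * log E[exp(-lam * X)]."""
    return -np.log(np.mean(np.exp(-lam * np.asarray(x)))) / lam

# A frictionless hedge that exactly captures the move has zero cost:
cr = replication_cost(payoff=0.1, deltas=np.array([1.0]),
                      spots=np.array([1.0, 1.1]), tr_rate=0.0)
print(cr)  # 0.0

# Certainty equivalent of -CR over a few illustrative cost scenarios:
print(utility(-np.array([0.05, 0.07, 0.12])))
```

Note that for a concave utility the certainty equivalent is below the mean of $-CR$: dispersion in the hedging cost is penalized, which is exactly the risk/return trade-off controlled by $\lambda$.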

## References

The main reference for this topic is the paper: Hans Bühler, Lukas Gonon, Josef Teichmann, Ben Wood: Deep Hedging, arXiv:1802.03042.

We also advise consulting the book: Hans Föllmer, Alexander Schied: Stochastic Finance: An Introduction in Discrete Time, Walter de Gruyter, 4th edition.

A nice presentation of these matters can also be found here: SwissQuote 2018

# Goal of the Challenge

The goal is to determine 3 strategies associated with 3 different limits L:

1. First case: L = 0.1

2. Second case: L = 0.2

3. Third case: L = 0.5

## Submission

A submission consists, for each of the 3 limits and for each of the 5,000 trajectories, of the optimal delta hedging strategy. For each limit, the average of the replication cost is computed. The score of your submission is the average of these 3 averages.

In case of a tie, an additional measure is computed: the expected shortfall. If two participants have the same score, the winner is the one with the better expected shortfall at the 1% level.

A submission therefore consists of a CSV file of 15,000 × 260 real numbers, i.e., 5,000 × 260 real numbers for each limit.
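Assembling the file then amounts to stacking one (5,000 × 260) block per limit. A minimal sketch, assuming the three blocks are simply stacked in the order of the limits (the exact ordering and file conventions should be checked against the official submission instructions):

```python
import numpy as np

# Hypothetical strategies: one (5000, 260) array of hedge positions per limit.
# Zeros here are placeholders for the optimized H_hedge values.
strategies = {limit: np.zeros((5000, 260)) for limit in (0.1, 0.2, 0.5)}

# Stack the three blocks into the 15,000 x 260 submission, limits in order.
submission = np.vstack([strategies[l] for l in (0.1, 0.2, 0.5)])
assert submission.shape == (15000, 260)

np.savetxt("submission.csv", submission, delimiter=",")
```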

## Constraints on the submission

The submission has to represent a hedge in delta for the vega hedged Equinox. Therefore, the following constraints should be enforced:

1. For each trajectory, the sum of the deltas should be equal to the payoff of the Equinox on that trajectory, which depends, of course, on whether the option is called.

2. At each date, the overall delta is the sum of the delta of the option, the delta of the vega hedge and $H_{hedge}$. This sum should be less than the limit L.
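Before submitting, both constraints can be verified mechanically on every trajectory. The sketch below is our own helper (names and toy numbers are illustrative); in particular, constraint 1 is checked under one possible reading, namely that the gains of the delta strategy replicate the realized payoff:

```python
import numpy as np

def check_constraints(h_hedge, delta_eq, delta_van, n_vega, spots, payoff,
                      limit, tol=1e-8):
    """Check the two submission constraints on one trajectory.
    h_hedge, delta_eq, delta_van: arrays of length 260; n_vega: scalar;
    spots: array of length 261; payoff: realized Equinox payoff."""
    overall = delta_eq + delta_van * n_vega + h_hedge
    limit_ok = bool(np.all(np.abs(overall) <= limit + tol))
    # Constraint 1, read as: the delta strategy's gains equal the payoff
    # (one interpretation of "the sum of the deltas equals the payoff").
    replication_ok = bool(abs(np.sum(h_hedge * np.diff(spots)) - payoff) <= tol)
    return limit_ok, replication_ok

# Toy trajectory (illustrative numbers, not challenge data):
spots = 1.0 + 0.001 * np.arange(261)
h = np.full(260, 0.05)
zeros = np.zeros(260)
result = check_constraints(h, zeros, zeros, 0.0, spots, payoff=0.013, limit=0.1)
print(result)  # (True, True)
```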

# Structure of the data set

There are several datasets available:

The official one is X_Validation_new2.CSV.

The official data set is a CSV file of 5,000 lines. Each line includes the following information:

• 261 Values of $S_t$ : the spot price.

• 261 values of $\alpha_t$ : the instantaneous volatility of the underlying at time t, and also the proxy for implied volatility. (cf. attached paper) - not needed for this project.

• 261 values of $\lambda_t$ : the parameter of the instantaneous correlation process between the underlying and its volatility. (cf. attached paper) - not needed for this project.

• 261 values of $X_t$ : the autocall value at time t.

• 261 values of $\left(\frac{\partial X}{\partial S} \right)_t$ : the delta of the autocall at time t.

• 261 values of $\left(\frac{\partial X}{\partial \sigma} \right)_t$ : the vega of the autocall.

• 261 values of $Z_t$ : the vanilla call option value at time t.

• 261 values of $\left(\frac{\partial Z}{\partial S} \right)_t$ : the delta of the vanilla call option at time t.

• 261 values of $\left(\frac{\partial Z}{\partial \sigma} \right)_t$ : the vega of the vanilla call option at time t.
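Each line thus carries 9 × 261 = 2,349 values. A small parsing sketch, assuming the fields appear in the order of the bullet list above (the field names are ours, and the ordering should be verified against the actual file before relying on it):

```python
import numpy as np

N = 261
FIELDS = ["S", "alpha", "lambda", "X", "dX_dS", "dX_dsigma",
          "Z", "dZ_dS", "dZ_dsigma"]   # assumed order, matching the list above

def split_row(row):
    """Split one CSV row of 9 * 261 values into named blocks of length 261."""
    row = np.asarray(row, dtype=float)
    assert row.size == 9 * N, f"expected {9 * N} values, got {row.size}"
    return {name: row[i * N:(i + 1) * N] for i, name in enumerate(FIELDS)}

# Example with a dummy row of consecutive integers:
blocks = split_row(np.arange(9 * N))
print(blocks["S"].shape)      # (261,)
print(blocks["dX_dS"][0])     # 1044.0, i.e. the start of the 5th block
```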

If you run into overparametrization problems which generate instabilities, it may be useful to use additional trajectories to stabilize the optimization. The supplementary file contains a set of 25,000 trajectories (together with the associated data described above) to help you stabilize the optimization.

The X_train dataset is the same as the X_test dataset, so training can be done with X_test alone. But it is advised to use the extended X_Supplementary_Training_new2.csv included in the supplementary files in order to obtain better stability while optimizing.

The time series used in this dataset are not arbitrage-free: they are given in the historical measure instead of the delta-neutral measure. The benchmark, described below, doesn't generate any profit. Will you be smart enough to exploit those arbitrages and make the autocall business profitable while respecting the risk policy?

# Benchmark

There is a whole range of possible strategies that deliver the payoff. Among all those strategies, there is a strategy that minimizes the expected replication cost, but its risk profile is costly according to the expected utility measure.

This is the benchmark used for this project.

The average replication cost for the 3 limits is 7.6%, which is very high. This result comes from the discretization of the hedge (which makes the replication less accurate), the transaction costs, and the wrong computation of the hedge ratios due to the use of the wrong model (Black-Scholes). But above all, it comes from the vega hedging imposed by the risk control.
