AI and Advanced Quant Analytics: Live & Online from Schonfeld
10.30 – 11.30: Decoding the Auto Encoder – A deep dive into how auto-encoder neural networks can effectively capture the dynamics of swap yield curves across multiple currencies.
Abstract: We fit auto-encoder neural networks to swap yield curves in 10 different currencies over a period of 15 years. We show that the different yield curve shapes can be captured well with only two to three factors, and that the same neural network can be used across all currencies. We further show that the resulting yield curves are close to dynamically arbitrage-free, and we identify the stochastic process of the short rate under the risk-neutral measure that is consistent with the neural network.
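The abstract does not specify the network architecture, so the following is only a minimal sketch of the general idea: an auto-encoder with a three-factor bottleneck fitted by reconstruction error to a panel of swap curves. The tenor grid, layer sizes, and training data below are placeholder assumptions, not the authors' model.

```python
# Minimal sketch of an auto-encoder with a 3-factor bottleneck fitted to a
# panel of swap yield curves (placeholder data; not the authors' exact model).
import torch
import torch.nn as nn

N_TENORS = 12      # e.g. pillar points 1y, 2y, ..., 30y (assumed)
N_FACTORS = 3      # the two-to-three latent factors mentioned in the abstract

class CurveAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_TENORS, 32), nn.ELU(),
            nn.Linear(32, N_FACTORS),
        )
        self.decoder = nn.Sequential(
            nn.Linear(N_FACTORS, 32), nn.ELU(),
            nn.Linear(32, N_TENORS),
        )

    def forward(self, curves):
        return self.decoder(self.encoder(curves))

# Placeholder training data: rows are daily curves pooled across currencies.
curves = torch.randn(5000, N_TENORS) * 0.01
model = CurveAutoEncoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(curves), curves)  # reconstruction error
    loss.backward()
    optimiser.step()
```

Pooling curves from all currencies into one training panel, as above, corresponds to the abstract's claim that a single network can serve all markets.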
Jesper Andreasen:
Head of Quantitative Analytics, Verition Fund Management LLC
Jesper Andreasen is head of Quantitative Analytics at Verition Fund Management LLC. Jesper has previously held senior positions in the quantitative research departments of Saxo Bank, Danske Bank, Bank of America, Nordea, and General Re Financial Products. Jesper’s recent research focusses on efficient and accurate methods for computing credit and market risk. Jesper holds a PhD in mathematical finance from Aarhus University, Denmark. He received Risk Magazine’s Quant of the Year awards in 2001 and 2012, joint with Leif Andersen and Brian Huge respectively, and is an honorary professor of mathematical finance at Copenhagen University.
11.30 – 12.30: Autoencoder-Based Risk-Neutral Model for Interest Rates
A new risk-neutral pricing model using autoencoders for a more accurate representation of yield curves.
Alexander Sokol (joint work with Andrei Lyashenko and Fabio Mercurio)
- It is well known that historical yield curve shapes can be accurately represented using very few latent variables.
- The recent extension to nonlinear representations by means of autoencoders provided further improvement in accuracy compared to the classical linear representations from the Nelson-Siegel family or those obtained using principal component analysis (PCA); a minimal sketch of the PCA baseline follows this list.
- We propose a Q-measure model in which future curve shapes follow such a low-dimensional representation derived from the historical data, up to a convexity correction, and examine its properties in both the affine and the autoencoder-based cases.
- We present evidence that our approach has notable advantages over the standard way of incorporating historical data into Q-measure models via factor correlation.
- Theoretical and practical benefits of the proposed model for risk and pricing applications are discussed.
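For comparison with the nonlinear case, here is a minimal sketch of the classical linear (PCA) representation referred to in the first two bullets: historical curves are projected onto a handful of principal components. The curve panel and tenor grid are placeholders, not the talk's dataset.

```python
# Minimal sketch of the classical linear (PCA) baseline: historical curves are
# represented by a few principal components. Data below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
curves = rng.normal(0.02, 0.01, size=(2500, 12))   # days x tenor pillars (assumed)

mean_curve = curves.mean(axis=0)
centred = curves - mean_curve

# Principal components of the curve panel via SVD.
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)

k = 3                                   # number of latent variables retained
factors = centred @ components[:k].T    # low-dimensional representation
reconstructed = mean_curve + factors @ components[:k]

explained = (singular_values[:k] ** 2).sum() / (singular_values ** 2).sum()
print(f"variance explained by {k} factors: {explained:.1%}")
```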
Alexander Sokol:
Executive Chairman and Head of Quant Research, CompatibL
Alexander Sokol is the founder, Executive Chairman, and Head of Quant Research at CompatibL, a trading and risk technology company. He is also the co-founder of Numerix, where he served as CTO from 1996 to 2003, and the co-founder of Duality Group, where he served as CTO from 2017 to 2020.
Alexander won the Quant of the Year Award in 2018, together with Leif Andersen and Michael Pykhtin, for their joint work revealing the true scale of the settlement gap risk that remains even in the presence of initial margin. Alexander’s other notable research contributions include systemic wrong-way risk (with Michael Pykhtin, Risk Magazine), joint measure models and the local price of risk (with John Hull and Alan White, Risk Magazine), and mean reversion skew (Risk Books, 2014).
Alexander earned his BA from the Moscow Institute of Physics and Technology at the age of 18, and a PhD from the L. D. Landau Institute for Theoretical Physics at the age of 22. He was the winner of the USSR Academy of Sciences Medal for Best Student Research of the Year in 1988.
12.30 – 13.30: Lunch
13.30 – 14.30: Notation as a tool of thought? LLMs, cognitive biases, and logical fallacies.
In 1979, Kenneth E. Iverson, the inventor of APL (which became the progenitor of an entire family of programming languages, such as A+, k, and q), delivered his ACM Turing Award Lecture. He emphasised the “importance of nomenclature, notation, and language as tools of thought.” Iverson, along with other computer scientists such as Bjarne Stroustrup and Guido van Rossum, continued to work on programming languages – specialized notations that eschew the limitations of natural (human) languages, thus facilitating complex and precise computation and advancing science. In recent years, there has been a reversal in computer science: natural language became the tool of choice; the widely adopted large language models (LLMs) use natural language as input and output and work on natural language-based intermediate representations. Nonetheless, there is a certain unease about natural languages, which is probably a legacy of “Newspeak” described in George Orwell’s “Nineteen Eighty-Four.” Not only can natural languages be used as a tool of manipulation (as in Orwell’s dystopian novel); they are inherently vulnerable to cognitive biases and logical fallacies, which may be an unavoidable feature of human existence. In this talk we explore the implications of this vulnerability. We also examine the significant role that cognitive biases and logical fallacies play in finance and economics. We compare and contrast LLMs and other directions in AI, such as reinforcement learning, where the emphasis is on provable optimality, which is missing in LLMs. Finally, we discuss various approaches to fixing this vulnerability.
Paul Bilokon:
CEO, Thalesians, Visiting Professor, Imperial College
Dr. Paul Bilokon is CEO and Founder of Thalesians Ltd and an expert in electronic and algorithmic trading across multiple asset classes, having helped build such businesses at Deutsche Bank and Citigroup. Before focussing on electronic trading, Paul worked on derivatives and has served in quantitative roles at Nomura, Lehman Brothers, and Morgan Stanley. Paul has been educated at Christ Church College, Oxford, and Imperial College. Apart from mathematical and computational finance, his academic interests include machine learning and mathematical logic.
14.30 – 15.30: Pathwise Methods and Generative Models for Pricing and Trading
The deep hedging framework and related deep trading setups have opened new horizons for solving hedging problems under a large variety of models and market conditions. In this setting, generative models and pathwise methods rooted in rough paths have proven to be powerful tools from several perspectives. At the same time, any model – a traditional stochastic model or a market generator – is at best an approximation of market reality, prone to model misspecification and estimation errors. In a data-driven setting, especially if sample sizes are limited by constraints, the latter issue becomes even more prevalent, as we demonstrate in examples. This raises the question of how to furnish a modelling setup (for deriving a strategy) with tools that can address the risk of a discrepancy between model and market reality, ideally in a way that is automatically built into the setting. A combination of classical and new tools yields insights into this matter.
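The talk does not prescribe an implementation; as a rough illustration of the deep hedging idea it refers to, the sketch below trains a small network to choose hedge ratios along simulated price paths so as to minimise a quadratic hedging error for a call option. The market simulator, payoff, and hyper-parameters are assumptions chosen purely for illustration, and transaction costs are ignored.

```python
# Minimal deep-hedging sketch: a network chooses hedge ratios along simulated
# price paths to minimise the terminal hedging error of a call option.
# The market model, payoff and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn

n_paths, n_steps, dt = 5_000, 30, 1 / 365
sigma, s0, strike = 0.2, 1.0, 1.0

# Simulated geometric Brownian motion paths (a stand-in for a market generator).
z = torch.randn(n_paths, n_steps)
log_increments = (-0.5 * sigma ** 2) * dt + sigma * dt ** 0.5 * z
paths = s0 * torch.exp(torch.cumsum(log_increments, dim=1))
paths = torch.cat([torch.full((n_paths, 1), s0), paths], dim=1)

policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(50):
    pnl = torch.zeros(n_paths)
    for t in range(n_steps):
        # State is (time, spot); richer state variables are possible.
        state = torch.stack([torch.full((n_paths,), t * dt), paths[:, t]], dim=1)
        delta = policy(state).squeeze(-1)          # hedge ratio at time t
        pnl = pnl + delta * (paths[:, t + 1] - paths[:, t])
    payoff = torch.clamp(paths[:, -1] - strike, min=0.0)
    loss = ((payoff - pnl) ** 2).mean()            # quadratic hedging criterion
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

The same training loop can be driven by paths from a market generator instead of the parametric simulator, which is where the model-misspecification question raised in the abstract becomes acute.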
Blanka Horvath:
Associate Professor in Mathematical and Computational Finance, University of Oxford, and Researcher, The Alan Turing Institute
Blanka’s research interests are in the area of stochastic analysis and mathematical finance, including asymptotic and numerical methods for option pricing, smile asymptotics for local and stochastic volatility models (the SABR model and fractional volatility models in particular), Laplace methods on Wiener space, and heat kernel expansions.
Blanka completed her PhD in Financial Mathematics at ETH Zürich with Josef Teichmann and Johannes Muhle-Karbe. She holds a Diploma in Mathematics from the University of Bonn and an MSc in Economics from the University of Hong Kong.
15.30 – 16.00: Afternoon Break and Networking Opportunities
16.00 – 17.00: Evaluating LLMs in Financial Tasks – Code Generation in Trading Strategies
In this paper, we perform a comprehensive evaluation of various Large Language Models (LLMs) for their efficacy in generating Python code specific to algorithmic trading strategies. Our study encompasses a broad spectrum of LLMs, including GPT-4-Turbo, Gemini-Pro, Mistral, Llama2, and Codellama, assessing their performance across a series of task-specific prompts designed to elicit precise code implementations for various technical indicators commonly used in the financial trading sector.
A principal component of our methodology involves the creation of a detailed prompt structure that adapts to the unique capabilities of each LLM. For OpenAI’s Assistant AI, we leverage an intricate prompt design that integrates templated responses, zero-shot task-specific prompts, and prompt chaining to guide the models through a step-by-step reasoning process, ensuring the generation of executable and accurate Python code. This structured approach not only facilitates the model’s comprehension of the task at hand but also allows for the nuanced adaptation of prompts to cater to the distinct processing styles of different LLMs.
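The paper’s actual prompt templates are not reproduced in the abstract; the sketch below merely illustrates the kind of zero-shot, task-specific prompt and two-step prompt chaining described, using the OpenAI Python client. The template wording, model name, and helper function are assumptions, not the authors' prompts.

```python
# Illustrative sketch of a zero-shot, task-specific prompt with simple chaining
# (template wording and model name are assumptions, not the paper's templates).
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

TASK_TEMPLATE = """You are given the signature and documentation of a technical
indicator. Return only executable Python code implementing it.

Signature: {signature}
Documentation: {doc}
"""

def generate_indicator(signature: str, doc: str, model: str = "gpt-4-turbo") -> str:
    """First step of the chain: request an implementation from signature + docs."""
    first = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": TASK_TEMPLATE.format(signature=signature, doc=doc)}],
    )
    draft = first.choices[0].message.content

    # Second step of the chain: ask the model to review and correct its own draft.
    second = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": TASK_TEMPLATE.format(signature=signature, doc=doc)},
            {"role": "assistant", "content": draft},
            {"role": "user", "content": "Check the code above for errors and return a corrected version, code only."},
        ],
    )
    return second.choices[0].message.content

code = generate_indicator(
    signature="def sma(prices: list[float], window: int) -> list[float]",
    doc="Simple moving average over a fixed window.",
)
```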
Our evaluation framework is grounded in a comparison against baseline results obtained from widely recognized libraries such as TALib, as well as a comprehensive Python implementation of the indicators. Through a meticulous process of parsing our code and constructing data frames that encapsulate function names, parameters, and documentation, we establish a foundational prompt that guides LLMs to propose viable Python code implementations. This zero-shot, task-specific approach is crucial in enabling the LLMs to methodically navigate the tasks, thereby enhancing the accuracy and relevance of the generated code. The findings indicate that GPT-4-Turbo, Codellama-70B, and Gemini-Pro yield encouraging results relative to baseline computations, with GPT-4-Turbo achieving implementations identical to the baseline in certain instances.
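As an illustration of the evaluation step, the sketch below executes a stand-in for LLM-generated indicator code and scores it point by point against a reference implementation. A pandas rolling mean stands in for the TALib/baseline implementation, and the data and tolerance are arbitrary assumptions.

```python
# Illustrative sketch of the evaluation step: an LLM-generated indicator is
# executed and compared numerically against a reference implementation.
# The "generated" function is a stand-in for model output; a pandas rolling
# mean stands in for the library baseline used in the paper.
import numpy as np
import pandas as pd

def reference_sma(prices: pd.Series, window: int) -> pd.Series:
    """Baseline simple moving average (reference implementation)."""
    return prices.rolling(window).mean()

# Pretend this body was returned by an LLM and load it into a namespace.
generated_source = """
def sma(prices, window):
    out = [float('nan')] * (window - 1)
    for i in range(window - 1, len(prices)):
        out.append(sum(prices[i - window + 1:i + 1]) / window)
    return out
"""
namespace: dict = {}
exec(generated_source, namespace)
generated_sma = namespace["sma"]

prices = pd.Series(np.random.default_rng(1).normal(100, 1, 500))
baseline = reference_sma(prices, window=14).to_numpy()
candidate = np.array(generated_sma(prices.tolist(), 14))

# Score the candidate: fraction of points matching the baseline to a tolerance.
match = np.isclose(baseline, candidate, atol=1e-8, equal_nan=True)
print(f"agreement with baseline: {match.mean():.1%}")
```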
Miquel Noguer Alonso:
Co-Founder and Chief Science Officer, Artificial Intelligence Finance Institute – AIFI
Miquel Noguer is a financial markets practitioner with more than 20 years of experience in asset management. He is currently Head of Development at Global AI (a Big Data and Artificial Intelligence in Finance company) and Head of Innovation and Technology at IEF.
He worked for UBS AG (Switzerland) as an Executive Director for the last 10 years. He worked as Chief Investment Officer (CIO) for Andbank from 2000 to 2006.
He is professor of Big Data in Finance at ESADE and Adjunct Professor at Columbia University, teaching Asset Allocation, Big Data in Finance, and Fintech. He received an MBA and a degree in business administration and economics from ESADE in 1993. In 2010 he earned a PhD in quantitative finance with a Summa Cum Laude distinction (UNED, Madrid, Spain).