World Business Strategies
Serving the Global Financial Community since 2000

AI and Advanced Quant Analytics: Live & Online from Schonfeld

10.30 – 11.30: Decoding the Auto Encoder – A deep dive into how auto-encoder neural networks can effectively capture the dynamics of swap yield curves across multiple currencies.

Abstract: We fit auto-encoder neural networks to swap yield curves in 10 different currencies over a period of 15 years. We show that the different yield curve shapes can be captured well with only two to three factors, and that the same neural network can be used across all currencies. We further show that the resulting yield curves are close to dynamically arbitrage-free, and we identify the stochastic process of the short rate under the risk-neutral measure that is consistent with the neural network.
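The abstract is a talk summary rather than a recipe, but the core construction can be sketched. The following is a minimal illustration, not the speaker's model: it fits a one-hidden-layer autoencoder with a three-factor tanh bottleneck to synthetic Nelson-Siegel-style curves (an assumed stand-in for the multi-currency swap data), using manual backprop and a hand-rolled Adam optimiser.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for swap curves: Nelson-Siegel-style shapes at 10 maturities.
mats = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0, 20.0, 30.0])

def ns_curve(b0, b1, b2, lam=1.5):
    x = mats / lam
    f = (1.0 - np.exp(-x)) / x
    return b0 + b1 * f + b2 * (f - np.exp(-x))

betas = rng.uniform([-0.01, -0.02, -0.02], [0.05, 0.02, 0.02], (500, 3))
curves = np.array([ns_curve(*b) for b in betas])
X = (curves - curves.mean(0)) / curves.std(0)      # standardise each maturity

# One-hidden-layer autoencoder with a 3-factor tanh bottleneck,
# trained with manual backprop and a hand-rolled Adam optimiser.
n_in, n_hid = X.shape[1], 3
params = [rng.normal(0, 0.1, (n_in, n_hid)), np.zeros(n_hid),
          rng.normal(0, 0.1, (n_hid, n_in)), np.zeros(n_in)]
m = [np.zeros_like(p) for p in params]
v = [np.zeros_like(p) for p in params]
lr, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8

for t in range(1, 3001):
    W1, b1v, W2, b2v = params
    H = np.tanh(X @ W1 + b1v)                      # encode to 3 latent factors
    err = H @ W2 + b2v - X                         # decode and compare
    dH = (err @ W2.T) * (1.0 - H**2)               # backprop through tanh
    grads = [X.T @ dH / len(X), dH.mean(0),
             H.T @ err / len(X), err.mean(0)]
    for i, g in enumerate(grads):                  # Adam update per parameter
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g**2
        mhat = m[i] / (1 - beta1**t)
        vhat = v[i] / (1 - beta2**t)
        params[i] = params[i] - lr * mhat / (np.sqrt(vhat) + eps)

W1, b1v, W2, b2v = params
mse = float(((np.tanh(X @ W1 + b1v) @ W2 + b2v - X) ** 2).mean())
print(f"reconstruction MSE with a 3-factor bottleneck: {mse:.4f}")
```

Because the synthetic curves live on a three-dimensional manifold by construction, a three-factor bottleneck reconstructs them with small error, which is the toy analogue of the talk's "two to three factors" claim.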

Jesper Andreasen: 

Head of Quantitative Analytics, Verition Fund Management LLC

11.30 – 12.30: Autoencoder-Based Risk-Neutral Model for Interest Rates

A new risk-neutral pricing model using autoencoders for a more accurate representation of yield curves.

Alexander Sokol (joint work with Andrei Lyashenko and Fabio Mercurio)

  • It is well known that historical yield curve shapes can be accurately represented using very few latent variables.
  • The recent extension to nonlinear representations by means of autoencoders provided further improvement in accuracy compared to the classical linear representations from the Nelson-Siegel family or those obtained using principal component analysis (PCA).
  • We propose a Q-measure model where future curve shapes follow such a low-dimensional representation derived from the historical data, up to a convexity correction, and examine its properties for both affine and autoencoder-based cases.
  • We present evidence that our approach has notable advantages over the standard way of incorporating historical data into Q-measure models via factor correlation.
  • Theoretical and practical benefits of the proposed model for risk and pricing applications are discussed.
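For context on the classical linear baseline mentioned in the bullets, the snippet below shows principal component analysis via SVD recovering a low-dimensional representation of a synthetic curve panel built from level, slope, and curvature loadings. The panel and loadings are illustrative assumptions, not the authors' dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic curve panel: level / slope / curvature loadings plus small noise,
# mimicking the stylised factor structure of historical yield curves.
mats = np.linspace(0.25, 30.0, 12)
level = np.ones_like(mats)
slope = np.exp(-mats / 5.0)
curv = (mats / 5.0) * np.exp(-mats / 5.0)
F = np.column_stack([level, slope, curv])              # (12, 3) factor loadings
scores = rng.normal(0, [0.02, 0.01, 0.005], (1000, 3))  # factor realisations
panel = scores @ F.T + rng.normal(0, 1e-4, (1000, 12))  # curves + noise

# Classical linear representation: PCA via SVD of the centred panel.
Xc = panel - panel.mean(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()
print("variance explained by first 3 PCs:", float(explained[:3].sum()))
```

The first three principal components capture essentially all the variance of this rank-three panel; the autoencoder extension discussed in the talk replaces the linear map `Vt` with a nonlinear encoder/decoder pair.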

Alexander Sokol:

Executive Chairman and Head of Quant Research, CompatibL

12.30 – 13.30: Lunch

13.30 – 14.30: Notation as a tool of thought? LLMs, cognitive biases, and logical fallacies.

In 1979, Kenneth E. Iverson, the inventor of APL (which became the progenitor of an entire family of programming languages, such as A+, k, and q), delivered his ACM Turing Award Lecture. He emphasised the “importance of nomenclature, notation, and language as tools of thought.” Iverson, along with other computer scientists such as Bjarne Stroustrup and Guido van Rossum, continued to work on programming languages – specialized notations that eschewed the limitations of natural (human) languages, thus facilitating complex and precise computation – and advancing science.

In recent years, there has been a reversal in computer science: natural language became the tool of choice. The widely adopted large language models (LLMs) use natural language as input and output, and work on natural language-based intermediate representations. Nonetheless, there is a certain unease about natural languages, which is probably a legacy of the “Newspeak” described in George Orwell’s “Nineteen Eighty-Four.” Not only can natural languages be used as a tool of manipulation (as in Orwell’s dystopian novel); they are inherently vulnerable to cognitive biases and logical fallacies, which may be an unavoidable feature of human existence.

In this talk we explore the implications of this vulnerability. We also examine the significant role that cognitive biases and logical fallacies play in finance and economics. We compare and contrast LLMs with other directions in AI, such as reinforcement learning, where the emphasis is on provable optimality, which is missing in LLMs. Finally, we discuss various approaches to fixing this vulnerability.

Paul Bilokon:

CEO, Thalesians, Visiting Professor, Imperial College

14.30 – 15.30: Pathwise Methods and Generative Models for Pricing and Trading

The deep hedging framework as well as related deep trading setups have opened new horizons for solving hedging problems under a large variety of models and market conditions. In this setting, generative models and pathwise methods rooted in rough paths have proven to be a powerful tool from several perspectives. At the same time, any model – a traditional stochastic model or a market generator – is at best an approximation of market reality, prone to model-misspecification and estimation errors. In a data-driven setting, especially if sample sizes are limited by constraints, the latter issue becomes even more prevalent, which we demonstrate in examples. This raises the question, how to furnish a modelling setup (for deriving a strategy) with tools that can address the risk of the discrepancy between model and market reality, ideally in a way that is automatically built in the setting. A combination of classical and new tools yields insights into this matter.

Blanka Horvath:

Associate Professor in Mathematical and Computational Finance, University of Oxford

15.30 – 16.00: Afternoon Break and Networking Opportunities

16.00 – 17.00: Evaluating LLMs in Financial Tasks – Code Generation in Trading Strategies

In this paper, we perform a comprehensive evaluation of various Large Language Models (LLMs) for their efficacy in generating Python code specific to algorithmic trading strategies. Our study encompasses a broad spectrum of LLMs, including GPT-4-Turbo, Gemini-Pro, Mistral, Llama2, and Codellama, assessing their performance across a series of task-specific prompts designed to elicit precise code implementations for various technical indicators commonly used in the financial trading sector.

A principal component of our methodology involves the creation of a detailed prompt structure that adapts to the unique capabilities of each LLM. For OpenAI’s Assistant AI, we leverage an intricate prompt design that integrates templated responses, zero-shot task-specific prompts, and prompt chaining to guide the models through a step-by-step reasoning process, ensuring the generation of executable and accurate Python code. This structured approach not only facilitates the model’s comprehension of the task at hand but also allows for the nuanced adaptation of prompts to cater to the distinct processing styles of different LLMs.

Our evaluation framework is grounded in a comparison against baseline results obtained from widely recognized libraries such as TA-Lib, as well as a comprehensive Python implementation of the indicators. Through a meticulous process of parsing our code and constructing data frames that encapsulate function names, parameters, and documentation, we establish a foundational prompt that guides LLMs to propose viable Python code implementations. This zero-shot task-specific approach is crucial in enabling the LLMs to methodically navigate through the tasks, thereby enhancing the accuracy and relevance of the generated code. The findings indicate that GPT-4-Turbo, Codellama-70B, and Gemini-Pro yield encouraging results relative to baseline computations, with GPT-4-Turbo achieving identical implementations to the baseline in certain instances.
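As an illustration of the kind of indicator implementation such prompts elicit, and that is then benchmarked against libraries such as TA-Lib, here are hand-written sketches of a simple moving average and Wilder's RSI. The function names and signatures are this note's assumptions, not the paper's code.

```python
import numpy as np

def sma(prices, window):
    """Simple moving average; NaN until a full window is available."""
    prices = np.asarray(prices, dtype=float)
    out = np.full(len(prices), np.nan)
    if window <= len(prices):
        csum = np.cumsum(np.insert(prices, 0, 0.0))
        out[window - 1:] = (csum[window:] - csum[:-window]) / window
    return out

def rsi(prices, period=14):
    """Relative Strength Index using Wilder's smoothing of gains/losses."""
    prices = np.asarray(prices, dtype=float)
    delta = np.diff(prices)
    gain = np.where(delta > 0, delta, 0.0)
    loss = np.where(delta < 0, -delta, 0.0)
    avg_gain = np.full(len(prices), np.nan)
    avg_loss = np.full(len(prices), np.nan)
    avg_gain[period] = gain[:period].mean()   # seed with a simple average
    avg_loss[period] = loss[:period].mean()
    for i in range(period + 1, len(prices)):  # Wilder recursive smoothing
        avg_gain[i] = (avg_gain[i - 1] * (period - 1) + gain[i - 1]) / period
        avg_loss[i] = (avg_loss[i - 1] * (period - 1) + loss[i - 1]) / period
    with np.errstate(divide="ignore", invalid="ignore"):
        rs = avg_gain / avg_loss              # inf when there are no losses
    return 100.0 - 100.0 / (1.0 + rs)
```

Seeding conventions differ between libraries (TA-Lib seeds Wilder's RSI the same way as above), which is exactly the kind of subtlety that makes a fixed baseline necessary when grading LLM-generated indicator code.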

Miquel Noguer Alonso:

Co-Founder and Chief Science Officer, Artificial Intelligence Finance Institute – AIFI

17.00 – 19.00: Roof Terrace Drinks Reception
