Financial market forecasting
Financial markets are chaotic systems that are difficult to predict. Algorithmic trading allows you to execute trading orders using pre-programmed strategies and is widely used by institutional investors. Machine learning is a promising approach to improve the performance of these trading strategies.
We are leveraging our open-source software FreqAI to explore the application of machine learning in algorithmic trading. We are currently working on the following projects:

QuickAdapter V4 vs V3

Adaptive modeling of financial market data using different strategies for algorithmic trading, comparing GPU to CPU performance

The objective of the present experiment is to compare the performance of the previous (V3) and new (V4) versions of the QuickAdapter strategy, in addition to testing a variety of hypotheses related to real-time adaptive modeling on a chaotic data source. Among our tests, we explore the differences between the GPU and CPU algorithms in the XGBoost machine learning library.

Three FreqAI instances are configured to train separate regressor models for 19 cryptocurrency pairs (/USDT):
XGBoost GPU running QAV4
XGBoost GPU and CPU running QAV3

The cluster is actively generating 57 models (3 per coin x 19 coins) with 3.3k features per model, and training new models every 5 minutes to 1 hour (depending on hardware and algorithm).
The XGBoost GPU variants are running on modern 16-core 3.9 GHz processors with 256 GB RAM and A4500 GPUs. Meanwhile, the XGBoost CPU variant is running on recycled 12-core 2.8 GHz servers (circa 2012) with 64 GB RAM. All hardware performance metrics are shown below.
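The GPU and CPU variants differ only in which tree-construction backend XGBoost is told to use. A minimal sketch of that parameter difference follows; the `tree_method` and `device` keys are XGBoost's own parameter names, while the remaining values are illustrative placeholders, not the experiment's actual settings:

```python
# Hypothetical parameter sets showing how the two variants are selected.
# Only tree_method/device pick the algorithm; other values are placeholders.
params_cpu = {
    "tree_method": "hist",   # histogram-based tree construction on CPU
    "device": "cpu",
    "n_estimators": 800,
    "learning_rate": 0.02,
}

params_gpu = {
    "tree_method": "hist",
    "device": "cuda",        # run histogram building on the GPU (XGBoost >= 2.0)
    "n_estimators": 800,
    "learning_rate": 0.02,
}

# Either dict would be unpacked into the regressor constructor,
# e.g. xgboost.XGBRegressor(**params_gpu).
```

Everything else in the pipeline (features, labels, retraining cadence) stays identical, so any difference in model quality or training time can be attributed to the backend.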

DISCLAIMER FreqAI is not affiliated with any cryptocurrency offerings. FreqAI is, and always will be, a not-for-profit, open-source project. FreqAI does not have a crypto token, FreqAI does not sell signals, and FreqAI does not have a domain besides the freqtrade documentation https://www.freqtrade.io/en/latest/freqai/. Please beware of imposter projects, and help us by reporting them to the official FreqAI discord server.


Completed experiments

The objective of the present experiment is to test a variety of hypotheses related to real-time adaptive modeling on a chaotic data source. Among our tests, we include a comparison of the popular XGBoost and CatBoost machine learning libraries. Further, we explore the differences between the GPU and CPU algorithms in the respective libraries. Finally, we explore the effect of dimensionality reduction through PCA.

Five FreqAI instances are configured to train separate regressor models for 19 cryptocurrency pairs (/USDT):
CatBoost CPU and GPU: solely exchange-derived features;
CatBoost CPU PCA: exchange-derived features after PCA transform for dimensionality reduction;
XGBoost CPU and GPU: solely exchange-derived features.
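The PCA instance projects the ~3.3k exchange-derived features onto a smaller set of principal components before training. A minimal NumPy sketch of that transform is below; FreqAI's actual implementation and its variance threshold may differ, so treat the 0.95 cutoff as an assumption:

```python
import numpy as np

def pca_reduce(X, var_threshold=0.95):
    """Project feature matrix X onto the principal components that
    together explain `var_threshold` of the total variance."""
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)
    n_components = int(np.searchsorted(np.cumsum(explained), var_threshold) + 1)
    return X_centered @ Vt[:n_components].T

# Toy data: 200 samples, 50 features, one feature fully redundant
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X[:, 1] = X[:, 0] * 2.0
X_reduced = pca_reduce(X)
print(X_reduced.shape)  # fewer columns than the original 50
```

The trade-off being tested is whether the cheaper, lower-dimensional training (and possible noise removal) outweighs the information discarded by the projection.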

The cluster is actively generating 95 models (5 per coin x 19 coins) with 3.3k features per model, and training new models every 5 minutes to 2 hours (depending on hardware and algorithm).
The CatBoost and XGBoost GPU variants are each running on modern 16-core 3.9 GHz processors with 256 GB RAM and A4500 GPUs. Meanwhile, the CatBoost and XGBoost CPU variants are running on recycled 12-core 2.8 GHz servers (circa 2012) with 64 GB RAM. All hardware performance metrics are shown below.

Below are screenshots from the dashboard reporting live results during the experiment.

Using FreqAI and its partner software Freqtrade, we pitted three of the most popular open-source machine learning libraries - XGBoost, LightGBM, CatBoost - against each other to see which regressor performs best at predicting the cryptocurrency market in real time.

Three FreqAI producer instances, one per regressor, were configured to train separate models for 19 cryptocurrency pairs (/USDT). The instances were hosted on separate, identical, recycled servers (12-core Xeon X5660, 2.8 GHz, 64 GB DDR3). The accuracy of the predictions produced by each regressor was assessed via two metrics: the balanced accuracy (the arithmetic mean of sensitivity and specificity) and a custom accuracy score (the normalized temporal distance between a prediction and its closest target). A second set of identical servers hosted two consumer instances, one per accuracy metric, that aggregated the prediction outputs from the regressors and selected the one with the highest score.
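The two metrics above can be sketched as follows. The balanced-accuracy formula is standard; the temporal-distance score is only one plausible reading of "normalized temporal distance between a prediction and its closest target" (the `horizon` normalization is an assumption), so treat it as an illustration rather than the experiment's exact code:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Arithmetic mean of sensitivity (recall on positives)
    and specificity (recall on negatives) for binary labels."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    sensitivity = np.mean(y_pred[y_true])      # TP / (TP + FN)
    specificity = np.mean(~y_pred[~y_true])    # TN / (TN + FP)
    return (sensitivity + specificity) / 2.0

def temporal_distance_score(pred_times, target_times, horizon):
    """For each predicted event time, take the distance (in candles) to
    the nearest target event, normalize by `horizon`, and invert so
    that 1.0 means a perfect temporal match."""
    dists = [min(abs(p - t) for t in target_times) for p in pred_times]
    return float(np.clip(1.0 - np.mean(dists) / horizon, 0.0, 1.0))

print(balanced_accuracy([1, 1, 0, 0], [1, 0, 0, 0]))          # 0.75
print(temporal_distance_score([10, 20], [11, 20], horizon=10))  # 0.95
```

Each consumer instance simply ranks the three regressors by one of these scores at every update and forwards the predictions of the current leader.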

Below are screenshots from the dashboard reporting live results during the experiment.