The objective of the present experiment is to compare the performance of the previous (V3) and new (V4) versions of the QuickAdapter strategy, and to test a variety of hypotheses about real-time adaptive modeling on a chaotic data source. Among these tests, we explore the differences between the GPU and CPU algorithms in the XGBoost machine learning library.
Three FreqAI instances are configured to train separate regressor models for 19 cryptocurrency pairs (/USDT):
XGBoost GPU running QAV4
XGBoost GPU running QAV3
XGBoost CPU running QAV3
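In practice, the GPU and CPU instances differ mainly in the device-selection parameters passed to the XGBoost regressor. The sketch below is illustrative only: the parameter names follow the XGBoost >= 2.0 API (`tree_method`, `device`), while the instance labels and the `build_params` helper are hypothetical, not part of the actual FreqAI configuration.

```python
# Hypothetical sketch of the regressor parameters each instance might pass
# to XGBoost. Key names follow the XGBoost >= 2.0 API, where "device"
# selects CUDA vs CPU execution; the instance labels are assumptions.
instances = {
    "gpu_qav4": {"tree_method": "hist", "device": "cuda"},  # QAV4 on an A4500 GPU
    "gpu_qav3": {"tree_method": "hist", "device": "cuda"},  # QAV3 on an A4500 GPU
    "cpu_qav3": {"tree_method": "hist", "device": "cpu"},   # QAV3 on 2012-era servers
}

def build_params(instance: str, n_estimators: int = 1000) -> dict:
    """Merge the per-instance device selection with shared regressor settings."""
    shared = {"n_estimators": n_estimators, "objective": "reg:squarederror"}
    return {**shared, **instances[instance]}

print(build_params("cpu_qav3"))
```

With identical shared settings, any performance gap between the GPU and CPU instances can be attributed to the device and hardware rather than to differing model hyperparameters.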
The cluster is actively generating 57 models (3 per coin x 19 coins), each with ~3.3k features, and training a new model every 5 minutes to 1 hour, depending on hardware and algorithm.
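The cluster scale stated above can be sanity-checked with a quick calculation; the numbers below come directly from the text, while the variable names are only for illustration.

```python
# Cluster-scale arithmetic using the figures stated in the text.
PAIRS = 19                 # cryptocurrency /USDT pairs
INSTANCES = 3              # GPU QAV4, GPU QAV3, CPU QAV3
FEATURES_PER_MODEL = 3300  # ~3.3k features per model

models = PAIRS * INSTANCES
print(models)  # 57 models generated by the cluster
print(models * FEATURES_PER_MODEL)  # total feature columns maintained across all models
```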
The XGBoost GPU variants run on modern 16-core 3.9 GHz processors with 256 GB RAM and A4500 GPUs. Meanwhile, the XGBoost CPU variant runs on recycled 12-core 2.8 GHz servers (circa 2012) with 64 GB RAM. All hardware performance metrics are shown below.
Enable iframe to view the dashboard.