Hi reader,
Welcome to another edition of the Trend Prophets Academy Newsletter!
In this edition, we look at:
Since we know your time is valuable, we summarize each section with its key points on top so that you can get all the important information in less than 2 minutes. Those who want to learn more and see the graphs and visuals can continue reading further down.
The summary:
Why is this important?
See below to read the full article.
Did you know that in the past 5 years, NASDAQ produced over 100 different signals on when to buy and sell? We did. That’s how we beat QQQ by over 125% in 5 years. Subscribe to Trend Prophets today.
In our last newsletter, we explored key considerations when evaluating AI-driven investment solutions and highlighted some practical use cases. We were overwhelmed by the feedback from many subscribers who posed similar questions and found value in gaining a clearer understanding of whether AI and machine learning can effectively predict asset prices. In short, the answer is: it depends.
In this week’s article, we delve deeper into various types of machine learning models, aiming to offer clarity on their challenges and potential sources of unreliability.
The first thing to grasp about machine learning (ML) is that there are approximately 10 widely used techniques (and their variants) in data science applications. Each technique involves a trade-off between interpretability and flexibility: simpler models are easier to explain, while more complex models can capture more intricate patterns.
Let’s consider predicting tomorrow’s weather based on atmospheric conditions. One approach could predict the exact temperature (a continuous quantity), while another might predict whether it will be hotter or colder compared to today (a categorical outcome: up or down). This prediction, known as the target variable, relies on specific features (atmospheric conditions).
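To make the distinction concrete, here is a minimal sketch in Python with entirely made-up numbers: the same set of features can feed either a regression target (a continuous temperature) or a classification target (up or down versus today).

```python
# Hypothetical weather example; all values are fabricated for illustration.
features = {"humidity": 0.62, "pressure_hpa": 1013.0, "wind_kph": 14.0}

today_temp_c = 21.0
tomorrow_temp_c = 23.5

# Regression target: a continuous quantity (tomorrow's exact temperature).
regression_target = tomorrow_temp_c

# Classification target: a categorical outcome (hotter or colder than today).
classification_target = "up" if tomorrow_temp_c > today_temp_c else "down"

print(regression_target)      # 23.5
print(classification_target)  # up
```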
Now, when selecting a model for this task, each model strikes its own balance between interpretability and flexibility. Ultimately, the critical concept here is overfitting. If there’s one takeaway about machine learning models, it’s that as complexity increases, so does the risk of overfitting. We’ll revisit this shortly.
Let’s begin with linear regression. This method, in use for over two centuries, assumes a linear relationship between the target and features. However, it imposes a linear relationship even if the actual relationship is nonlinear. This approach sacrifices flexibility but provides a clear understanding of how each feature influences predictions. Using our weather example, linear regression would precisely show how each atmospheric condition affects tomorrow’s temperature, making the prediction rationale transparent. Thus, this model offers high interpretability (low complexity) but limited flexibility due to its inability to accommodate all nuances in feature-target relationships.
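A minimal sketch of what this looks like, using ordinary least squares on fabricated data (one feature, humidity, predicting next-day temperature). The interpretability shows up directly: the fitted slope says exactly how much one unit of humidity changes the prediction.

```python
# Simple linear regression (ordinary least squares) from first principles.
def fit_linear(xs, ys):
    """Return (slope, intercept) minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Fabricated observations: humidity (fraction) vs next-day temperature (C),
# constructed to lie exactly on temp = 27 - 10 * humidity.
humidity = [0.25, 0.5, 0.75, 1.0]
temp     = [24.5, 22.0, 19.5, 17.0]

slope, intercept = fit_linear(humidity, temp)
print(slope, intercept)  # -10.0 27.0
```

The slope of -10.0 reads directly as "each additional unit of humidity lowers the predicted temperature by 10 degrees" — that is the interpretability the article refers to.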
Now, consider more complex models like decision trees or random forests. These models handle non-linearity better by not imposing strict linear relationships. Instead, they create decision rules based on feature observations to make predictions. In our weather analogy, this would involve a series of “if” statements based on various atmospheric conditions to form a predictive model. These models offer greater flexibility, but this comes at the cost of reduced interpretability. Consequently, understanding why a model makes a specific prediction and which features are most influential becomes more challenging.
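The "series of if statements" can be sketched literally. The thresholds below are invented for illustration; a real tree learner (for example, scikit-learn's `DecisionTreeClassifier`) would learn such splits from data, and a random forest would average many trees trained on resampled data.

```python
# A hand-rolled "decision tree" for the weather analogy: nested if statements
# over made-up thresholds, predicting whether tomorrow is hotter ("up") or
# colder ("down") than today.
def predict_temp_direction(humidity, pressure_hpa, wind_kph):
    if pressure_hpa > 1015:      # high pressure: likely clear skies
        if humidity < 0.5:
            return "up"
        return "down"
    else:                        # low pressure: likely cloud or rain
        if wind_kph > 30:
            return "down"
        return "up" if humidity < 0.3 else "down"

print(predict_temp_direction(0.4, 1020.0, 10.0))  # up
```

With dozens of learned splits across many trees, tracing why a particular prediction was made becomes hard — which is exactly the interpretability cost described above.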
Lastly, there are deep learning models, the most flexible and complex techniques available. These models mimic how the human brain processes information through layers of interconnected neurons. While deep learning excels with large datasets and complex patterns, it often lacks interpretability, operating more like a “black box.”
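The "layers of interconnected neurons" idea can be shown in a few lines. This toy network uses hard-coded weights purely for illustration; a real network has thousands to billions of weights, all learned from data via backpropagation, which is why no single weight is individually meaningful — the "black box" effect.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum followed by a sigmoid activation."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward(features):
    # Hidden layer: two neurons, each looking at all input features.
    h1 = neuron(features, [0.5, -0.2], 0.1)
    h2 = neuron(features, [-0.3, 0.8], 0.0)
    # Output layer: one neuron combining the hidden activations.
    return neuron([h1, h2], [1.2, -0.7], 0.05)

p = forward([0.6, 0.4])
print(round(p, 3))  # a probability-like score strictly between 0 and 1
```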
As model complexity increases and interpretability decreases, the risk of overfitting rises. Overfitting occurs when a model learns noise or random patterns from training data, impairing its ability to generalize to new, unseen data—a common pitfall in machine learning practice.
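Overfitting can be demonstrated with a deliberately simple, fabricated example. The true relationship below is y = x, but the training labels carry noise. A maximally flexible model that memorises the training set (here, 1-nearest-neighbour) scores a perfect zero error on the training data, yet does worse than the simple rule on a new point — it learned the noise, not the signal.

```python
# True relationship: y = x. Training labels include +/-0.5 noise.
train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [1.5, 1.5, 3.5, 3.5]

def simple_model(x):   # low flexibility: just predict y = x
    return x

def memoriser(x):      # high flexibility: 1-nearest-neighbour lookup
    nearest = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return train_y[nearest]

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

print(mse(simple_model, train_x, train_y))  # 0.25 (imperfect on noisy labels)
print(mse(memoriser, train_x, train_y))     # 0.0  (memorised the noise exactly)

# Error on a new, unseen point drawn from the true relationship y = x:
print(round(abs(simple_model(2.4) - 2.4), 2))  # 0.0  generalises well
print(round(abs(memoriser(2.4) - 2.4), 2))     # 0.9  fails to generalise
```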
Therefore, when constructing a machine learning model, it’s crucial to consider these trade-offs and experiment with various models and techniques to find the most suitable one for your specific case.
Despite some models lacking interpretability, there are techniques available to understand model behavior and identify influential features in predictions, such as permutation feature importance and SHAP values. This capability enhances our ability to analyze models broadly or on an individual prediction basis, yielding valuable insights.
This illustrates the power of machine learning, with ongoing advancements continuously enhancing our toolkit and improving model quality.
I hope this article sheds light on how machine learning models function and the thought process involved in approaching data science projects.
One challenge with machine learning models in predicting asset prices is the low signal-to-noise ratio in price series. This requires filtering to extract meaningful signals—an area where Trend Prophets excels! We specialize in isolating trends and signals in asset prices, guiding decisions on when to invest in various ETFs and when to stay in cash. Our focus is on downside protection, capturing upside potential while safeguarding against significant market downturns that erode long-term wealth. This represents passive investing 2.0, and our results speak for themselves. Contact us today to learn more and subscribe here.
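One generic way to raise the signal-to-noise ratio of a price series is smoothing, for example with a simple moving average. The sketch below uses made-up prices and is a textbook illustration only — it is not the Trend Prophets methodology.

```python
# Simple moving average: each output is the mean of `window` consecutive prices.
def moving_average(prices, window):
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

# Fabricated prices: noisy oscillation around a rising trend.
prices = [100, 103, 99, 104, 101, 106, 103, 108]
smooth = moving_average(prices, 3)
print([round(s, 2) for s in smooth])
```

The smoothed series damps the day-to-day noise, making the underlying upward drift easier to see — the basic idea behind extracting a trend signal from a noisy price series.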
Your future self will thank you for it!
Table 2: Trend Prophets core strategy performance (2010-03 to 2024-06).
Don’t wait. Subscribe today and see what we can do for you.
That’s it for this edition of the Trend Prophets newsletter! Please contact us at info@trendprophets.com for any questions.
Cordell L. Tanny, CFA, FRM, FDP
President & Founder
Disclaimers: Past performance is no guarantee of future results. This newsletter should not be considered as investment advice and is intended for information purposes only. Please see our Terms and Conditions for all disclaimers.