Monday 23 January 2017

FTI on binary trading

Forex Trend Indicator on Binary trading 


Read more ...

Friday 20 January 2017

Better Candles Indicator

This simple indicator provides additional options for customizing the look of your candlestick charts on MarketScope.

It supports separate outline colour, fill colour, and hollow/solid fill for up and down candles. Line thickness and candle body width can also be configured. Set it up how you like it, and add it to your default chart template for all new charts. It can also be used for multi-time-frame charts, e.g. showing D1 candles superimposed on a lower time-frame chart.




It is available on the store.
Read more ...

Tuesday 17 January 2017

How to register


Read more ...

GBP/USD prior to UK PM's big speech on BREXIT

Candle Profile showing buying/selling ahead of UK Prime Minister Theresa May's long-awaited speech on BREXIT.

This view of the Candle Profile indicator uses the "real volume" (buying/selling volume) from FXCM to generate the profile (green = buying, red = selling, yellow = delta). You can see the selling in these last few candles, but price still went up. This is the big boys pushing the price higher to close the gap.

I expect it to move in both directions during the news event (Theresa May is due to speak at 11.45am UK time).

Read more ...

Wednesday 11 January 2017

USD/JPY Candle Profile showing buying/selling during "news event"

Lots of movement today on the USD/JPY before, during and after President-elect Trump's news conference.

Some great buying & selling showing up on the Candle Profile.


The Candle Profile really works well for this type of scenario. More information on the Candle Profile here.
Read more ...

Machine Learning for Trading (Part 4) - Decision Trees

Decision tree learning uses a decision tree as a predictive model which maps observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves).

It is one of the predictive modelling approaches used in statistics, data mining and machine learning.

Decision trees where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees.

Tools are available (for example scikit-learn Python library) to enable efficient generation of decision trees.
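
As a rough illustration, here is a minimal sketch of generating a classification tree with the scikit-learn library; the indicator values and buy/sell/flat labels are invented purely for the example:

```python
# Minimal sketch: training a classification tree on (hypothetical) indicator data.
# The feature values and labels below are made up purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [RSI, Stochastics %K, price distance from MVA]
X = np.array([
    [75.0, 82.0,  0.0012],
    [28.0, 15.0, -0.0010],
    [55.0, 48.0,  0.0001],
    [81.0, 90.0,  0.0020],
    [22.0, 10.0, -0.0015],
    [50.0, 52.0, -0.0002],
])
y = ["sell", "buy", "flat", "sell", "buy", "flat"]   # target labels

tree = DecisionTreeClassifier(max_depth=3)
tree.fit(X, y)

# Classify a new observation
print(tree.predict([[70.0, 75.0, 0.0008]]))   # most likely ['sell']
```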

The trees are generally binary trees, i.e. each node has 2 branches. A "decision" at each node causes the search to proceed down either the left or right "branch" to the next node, and so on until a "leaf" node is reached.

The decisions are basically tests on the data set. For example, suppose we collected data from various technical indicators: RSI, STOCHASTICS, MVA, etc. A tree might be constructed something like the diagram below...


The above example is a classification tree, i.e. the leaf nodes are classes like buy/sell/flat. It's also possible to construct trees whose leaf nodes are numeric values, e.g. a prediction of the size of a price move. These are sometimes called regression trees.

By the way, the above tree is just for illustration purposes.

The tree doesn't have to be balanced (i.e. there's no need for an equal number of nodes on both sub-branches), criteria can appear multiple times, not all criteria need to be used, etc. It's actually a very flexible approach, since the data can be very heterogeneous (numerical values, classes, yes/no, etc.).

The most efficient way to develop the tree is to select, at each step, the criterion that provides the most information. For example, suppose we were using a decision tree to guess a playing card; the first question should be "is the card red?" as this instantly splits the possibilities in half.

Decision trees are prone to over-fitting, but this can be mitigated by using ensemble methods (e.g. several different trees and using majority vote).

The process of generating a decision tree is also potentially useful in itself. For example, we can throw a heap of technical indicator data at the tree generation process and see which inputs are the most important (i.e. the ones used nearer the top of the tree). This information may be useful for further studies.
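
To illustrate both points, here is a small sketch using scikit-learn: an ensemble of trees (a random forest) to reduce over-fitting, with its feature_importances_ attribute used to rank the inputs. The data here is synthetic noise, purely for illustration:

```python
# Sketch: ensemble of trees + feature importance ranking (illustrative data only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # columns standing in for RSI, STOCH, MVA, ATR
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # labels driven mostly by the first two columns

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

for name, importance in zip(["RSI", "STOCH", "MVA", "ATR"], forest.feature_importances_):
    print(f"{name}: {importance:.3f}")         # RSI and STOCH should rank highest
```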
Read more ...

Tuesday 10 January 2017

Candle Pattern Indicator and Scanner for MarketScope 2.0

Read more ...

Machine Learning for Trading (Part 3) - k-Nearest Neighbours

A popular and simple machine learning algorithm is k-Nearest Neighbours (or k-NN for short).

The k-NN algorithm is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression:
  • In k-NN classification, the output is a class membership. An object is classified by a majority vote of its neighbours, with the object being assigned to the class most common among its k nearest neighbours (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbour.
  • In k-NN regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbours.

For example, suppose we have training data for some fruit, with measurements of size, colour and acidity, and a label stating the type of fruit (orange, lemon, lime). We can use k-NN to "predict" the class of an unknown fruit based on its measurements of size, colour and acidity. Alternatively, we could "predict" the value of acidity based on the fruit's colour, size and type. The former case is classification, the latter is regression.
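
Here is a minimal sketch of the fruit classification case using scikit-learn's k-NN classifier; the measurements and labels below are made up for the example:

```python
# Sketch of the fruit example with scikit-learn's k-NN classifier.
# Measurements (size, colour score, acidity) are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

X = [
    [7.5, 0.9, 3.5],   # orange
    [8.0, 0.8, 3.8],   # orange
    [5.5, 0.3, 2.2],   # lemon
    [6.0, 0.2, 2.4],   # lemon
    [4.5, 0.5, 2.0],   # lime
    [4.8, 0.6, 2.1],   # lime
]
y = ["orange", "orange", "lemon", "lemon", "lime", "lime"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

print(knn.predict([[5.8, 0.25, 2.3]]))   # most likely ['lemon']
```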

For time-series analysis, this is generally just treated as a regression in 1 dimension, i.e. predicting a price value from the "k" nearest time samples. This works very well for smoothing the data, but unfortunately breaks down somewhat at the right-hand edge, since the k nearest samples will always be to the left of it. In fact, the result is somewhat similar to a simple moving average.

The following graph shows some sample time series data, with a k-NN (k = 3) and a simple moving average (length = 3) for comparison.


The 1-dimensional k-NN does a great job of smoothing the data, but is not particularly good at predicting the future (or at least no better than a moving average).
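
For reference, a minimal sketch of the 1-dimensional case using scikit-learn's KNeighborsRegressor on a synthetic price series, with a length-3 simple moving average for comparison (all numbers invented for illustration):

```python
# Sketch: 1-dimensional k-NN regression on a synthetic price series,
# compared with a simple moving average of the same length.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

t = np.arange(100).reshape(-1, 1)                       # time index as the only feature
noise = np.random.default_rng(1).normal(0, 0.0005, 100)
price = 1.05 + 0.002 * np.sin(t[:, 0] / 5.0) + noise

knn = KNeighborsRegressor(n_neighbors=3)
knn.fit(t, price)
knn_smooth = knn.predict(t)                             # k-NN "smoothed" series

sma = np.convolve(price, np.ones(3) / 3, mode="valid")  # simple MA, length 3

print(knn_smooth[-5:])
print(sma[-5:])
```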

Another approach is to use a multi-dimensional k-NN. For example, suppose we use a feature vector "v" of 10 dimensions (denoted v[0] to v[9]). Now, let v[0] be the price at time t0, v[1] the price at time t1, v[2] the price at time t2, etc.

We can use the k-NN algorithm to find the k nearest neighbours in our training data to this target vector. In this sense, it is essentially operating as a pattern matching algorithm. The output could be something like the average price-move of our k nearest matches, or even a buy/sell classification.
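
A rough sketch of that idea: each feature vector is a window of 10 consecutive prices, and the value to predict is the next bar's price move (synthetic random-walk data, parameters chosen arbitrarily for illustration):

```python
# Sketch: multi-dimensional k-NN as a pattern matcher. Each training vector holds
# 10 consecutive prices; the target is the price move of the following bar.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
prices = 1.05 + np.cumsum(rng.normal(0, 0.0003, 2000))   # random-walk price series

window = 10
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:] - prices[window - 1:-1]               # move of the bar after each window

knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(X, y)

latest = prices[-window:].reshape(1, -1)                  # most recent 10-bar pattern
print(knn.predict(latest))                                # average move of the 5 closest matches
```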

However, there are a number of issues:
  • We need some kind of price invariance built into the system. For example, a particular pattern based around 1.100 should be matched with an identical pattern that happens to be based around 1.200 or 1.300, etc.
  • Very large data sets are needed, and unlike the 1-dimensional case where we only had to search the last k points, we have to search the whole training set for our matches. This presents challenges for storage and performance.

One way to achieve price invariance is to look at % change. For example, simply re-state the feature vector so that v[0] = 0, v[1] = (price at time t1 - price at time t0) / (price at time t0), etc. Similarly, one could use "returns" or "log returns".
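
A small sketch of that transformation (the price windows are invented; both produce essentially the same feature vector):

```python
# Sketch: re-stating a window of prices as percentage changes relative to the first
# price in the window, so that patterns around 1.100 and 1.200 look the same.
import numpy as np

def to_relative(window):
    """Convert raw prices to % change from the first price in the window."""
    window = np.asarray(window, dtype=float)
    return (window - window[0]) / window[0]

a = [1.1000, 1.1011, 1.1022, 1.1000]
b = [1.2000, 1.2012, 1.2024, 1.2000]   # the same shape, based around 1.200

print(to_relative(a))   # both print (approximately) the same feature vector
print(to_relative(b))
```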

Tackling the large amount of data requires a number of innovations:
  • Reducing which data is used. For example, if we only care about price patterns that occur when the price is "over-bought" or "over-sold", then there is no need to store data for all the other cases.
  • Using data structures like k-d trees to store the data more efficiently can drastically reduce the search time (see the sketch below).
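
A minimal sketch of the k-d tree idea using scipy (the stored patterns are random numbers, purely to show the build-once / query-many pattern):

```python
# Sketch: using a k-d tree (scipy) to speed up nearest-neighbour look-ups over a
# large set of stored pattern vectors. Data is synthetic, for illustration only.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)
patterns = rng.normal(size=(100_000, 10))    # 100k stored 10-dimensional patterns

tree = cKDTree(patterns)                     # build once, query many times

query = rng.normal(size=10)
distances, indices = tree.query(query, k=5)  # 5 closest stored patterns
print(indices, distances)
```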
Read more ...

Monday 9 January 2017

New Release: Candle Pattern Indicator and Scanner (now with alert notifications)

I've added alert notifications to the Candle Pattern Indicator and Candle Pattern Scanner.

When alerts are enabled, the indicator will cause an alert (pop-up alert and/or sound) whenever a new candle pattern is detected.

More information can be found here and here.
Read more ...

Friday 6 January 2017

New Release: Candle Pattern Indicator and Scanner

I've just uploaded the Candle Pattern Indicator and Scanner. These indicators are now available from the store.

The "indicator" detects and identifies popular candle patterns on the chart.

The "scanner" further enhances this capability by simultaneously analysing multiple instruments at 3 different time-frames. The indicator displays a �mini-chart� for each specified instrument and time-frame combination, showing the currently identified candle pattern (if any).

The following patterns are supported: Doji, Spinning Top, Hammer, Shooting Star, Inverted Hammer, Hanging Man, Marubozu, Bullish Engulfing, Bearish Engulfing, Piercing Line, Dark Cloud, Bullish Harami, Bearish Harami, Morning Star, Evening Star.

Candle Pattern Indicator
Candle Pattern Scanner

Both indicators are now available on the store.
Read more ...

Candle Pattern Scanner

The Candle Pattern Scanner is an "add-on" for the Candle Pattern Indicator.

The Candle Pattern indicator identifies several popular candle-stick patterns which are then high-lighted on the chart.

The Candle Pattern Scanner further enhances this capability by simultaneously analysing multiple instruments at 3 different time-frames. The indicator displays a "mini-chart" for each specified instrument and time-frame combination, showing the currently identified candle pattern (if any).

This indicator utilizes the Candle Pattern Indicator directly, and therefore uses the same mechanisms and formulas to detect the candle patterns. Please refer to the Candle Pattern Indicator for details of this operation.

The following patterns are supported: Doji, Spinning Top, Hammer, Shooting Star, Inverted Hammer, Hanging Man, Marubozu, Bullish Engulfing, Bearish Engulfing, Piercing Line, Dark Cloud, Bullish Harami, Bearish Harami, Morning Star, Evening Star.

Input Parameters








Screenshots





Educational Links


The following links provide some good background information on candle patterns:
NOTE: Links to third-party sites are provided for your convenience and for informational purposes only.

The indicator is available on the store.
Read more ...

Candle Pattern Indicator

The Candle Pattern indicator identifies several popular candle-stick patterns which are then high-lighted on the chart.

The following patterns are supported: Doji, Spinning Top, Hammer, Shooting Star, Inverted Hammer, Hanging Man, Marubozu, Bullish Engulfing, Bearish Engulfing, Piercing Line, Dark Cloud, Bullish Harami, Bearish Harami, Morning Star, Evening Star.

Many of the identified candle patterns are only valid in a certain trend direction, e.g. a "Shooting Star" may only appear in an up-trend. The indicator provides several techniques for determining the local trend, selected via the "Trend filter" option.

Currently the following "trend filters" are supported:
  • MVA
    • Simple moving average. If the current price is above the moving average line then it is an up-trend; if the current price is below the moving average line then it is a down-trend.
  • DMI
    • Directional Movement Index. The DMI indicator provides 2 outputs: DI+ and DI-. If DI+ is greater than DI- then it is an up-trend; if DI+ is less than DI- then it is a down-trend.
  • RSI
    • Relative Strength Index. This oscillator is often used for over-bought / over-sold detection. However, because it is essentially counting the ratio of up-moves to down-moves, it is also quite good at determining local trend. If the RSI level is greater than 60 then it is an up-trend; if the RSI level is less than 40 then it is a down-trend.

The "trend filter" can also be turned off (which means patterns can be detected regardless of trend direction).
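
For illustration only (this is not the indicator's actual code, which runs inside MarketScope), the trend-filter rules described above could be expressed along these lines; the indicator values passed in are assumed to be computed elsewhere:

```python
# Illustrative sketch of the trend-filter rules described above.
# The price and indicator values are assumed to come from elsewhere.
def trend_from_mva(price, mva):
    return "up" if price > mva else "down" if price < mva else "none"

def trend_from_dmi(di_plus, di_minus):
    return "up" if di_plus > di_minus else "down" if di_plus < di_minus else "none"

def trend_from_rsi(rsi, upper=60, lower=40):
    return "up" if rsi > upper else "down" if rsi < lower else "none"

# Example: a Hanging Man is only valid in an up-trend
print(trend_from_rsi(65))   # 'up'   -> pattern would be accepted
print(trend_from_rsi(50))   # 'none' -> pattern filtered out
```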

The indicator provides several options for displaying the identified candle patterns, including labels, tool-tips, shaded rectangle, etc.

A separate �scanner� is also available, which can simultaneously analyse multiple instruments at 3 different time-frames.

Input Parameters








Screenshots



Educational Links


The following links provide some good background information on candle patterns:
NOTE: Links to third-party sites are provided for your convenience and for informational purposes only.

The indicator is available on the store.
Read more ...

Wednesday 4 January 2017

Machine Learning for Trading (Part 2)

Machine learning is sometimes conflated with data mining, where the latter subfield focuses more on exploratory data analysis and is known as unsupervised learning. I like to think of Data Mining as looking for nuggets of gold (i.e. new information) in the depths of the Earth (i.e. the data).

Machine Learning is more directed, we know more or less what we are trying to achieve (e.g. a prediction or a classification) and we use the machine learning techniques to find a way to do it.

However, this distinction could also be seen purely as nuance.

Let's first look at what we want to achieve. For example:
  • A prediction of future price (e.g. in the next 1 hour, EUR/USD will be 1.0500)
  • A prediction of future price direction (e.g. in the next 1 hour, EUR/USD will go up)

The first is a regression task; the second is more of a classification task.

Of course, predictions don't have to be just about price. You could also predict volatility, daily range or a bunch of other variables. Classification tasks could include things like: will this be a trend day or a range day? Will the low of the day hold, or will there be a break-out?

Now, let's look at some ways we can start to tackle this. Machine Learning / Data Mining has many approaches, from simple linear regression, to decision trees, to neural networks and genetic algorithms.

I've looked at neural networks in the past. They are fantastically interesting tools, but also very hard to get right, computationally expensive, and opaque (i.e. you never really understand how it works). Genetic algorithms are great for optimisation and searching multi-dimensional spaces, but they are also computationally expensive and can be difficult to construct and use. I shall not be looking at either of these in this series.
Read more ...

Tuesday 3 January 2017

Machine Learning for Trading (Part 1)

Over the past few weeks / months, I've been looking at the possibilities of using Machine Learning techniques for trading. This has mainly been research, reading, watching some stuff, messing around with Python, etc. Also, a few experiments on MarketScope. Let's call that Phase 1 - it wasn't very directed and nothing really concrete came out of it; it was just meant to be a learning activity to discover some of the possibilities.

I'm now at a stage where I want to focus a little more, and put some more effort into this activity. With more direction, and some actual goals too. I'll try to cover as much as possible on the blog over the next few weeks and months.

What is Machine Learning?

"Machine learning is a method of data analysis that automates analytical model building. Using algorithms that iteratively learn from data, machine learning allows computers to find hidden insights without being explicitly programmed where to look"

Well, that's one definition at least. I guess the essence is that there is some kind of "engine" which will analyse the data (inputs) and produce / deduce some kind of result (output) without having to be explicitly programmed how to do it - the learning algorithm itself works out the "how" from the data.

Machine Learning is typically used for either "classification" or "regression". Probably most people are familiar with regression, such as linear regression, where a straight line is fitted to the observed data, and this line can be used to predict values that have not yet been observed. Classification is where the data is classified into 2 or more categories and the learning algorithm learns how to classify new points based on its learnt model. These categories (or classes) can be given (e.g. as labels in the training data) or even determined by the learning algorithm itself.
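
As a tiny sketch of the two flavours using scikit-learn (the data points are invented): a linear regression fitted to a handful of (x, y) values, and a simple classifier separating two labelled groups:

```python
# Minimal sketch: regression (fit a straight line) vs classification (assign labels).
# All data points are invented purely for illustration.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: predict y from x with a straight line
X = [[1], [2], [3], [4], [5]]
y = [1.1, 1.9, 3.2, 3.9, 5.1]
print(LinearRegression().fit(X, y).predict([[6]]))   # roughly 6.0

# Classification: learn to separate two labelled classes
X2 = [[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]]
y2 = ["A", "A", "A", "B", "B", "B"]
print(LogisticRegression().fit(X2, y2).predict([[2, 2], [9, 9]]))   # ['A' 'B']
```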

Machine Learning is used in increasingly many tasks these days, everything from search engines to computer vision and speech recognition. I will of course be focusing on its application to trading.

Wiki page on Machine Learning.
Read more ...