The standard way to proceed is to make predictions from the entire tree (or whatever model you are using). You then compare the predicted values to the true values using some statistic(s) of interest, such as mean squared error for the regression tree you are using.
This statistic of interest is then your measure of performance for the entire tree, not just of an individual node.
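As a minimal sketch of what this looks like (the tiny hand-written tree, its split points, and its leaf values are all made up for illustration, not a real fitted model):

```python
def tree_predict(x):
    """Toy regression tree: route x down two splits to a leaf value."""
    if x < 3:
        return 1.0   # leaf A
    elif x < 7:
        return 5.0   # leaf B
    else:
        return 9.0   # leaf C

X = [1, 2, 4, 6, 8, 10]             # features
y = [1.2, 0.8, 5.5, 4.5, 9.3, 8.7]  # true targets

# Send every observation down the whole tree, then score all the
# predictions at once: this MSE describes the tree, not any one node.
preds = [tree_predict(x) for x in X]
mse = sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)
print(round(mse, 4))  # → 0.1267
```

In a real workflow you would call something like `model.predict(X)` on your fitted tree and pass the result to your MSE function, but the structure is the same: predictions for the whole model first, one summary statistic second.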
There are many concerns downstream of this (discussed in the comments), which is why methods like cross validation and bootstrap validation exist. However, all of them begin by sending your features down the decision tree to get predictions from the entire tree, just as you would for any other supervised learning model.