There are numerous metrics besides accuracy for assessing the performance of a prediction model. These become especially important for imbalanced datasets, such as activism campaigns.
For example, consider a fraudulent transaction detection model where only 1 out of 10,000 transactions is fraudulent. A model that simply predicts every transaction as "not fraud" would boast an accuracy as high as 99.99%, yet it is obviously useless.
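To make the numbers concrete, here is a minimal Python sketch using the 1-in-10,000 ratio from the example (the variable names and toy data are just for illustration) showing how such a trivial model reaches 99.99% accuracy:

```python
# 1 fraudulent transaction out of 10,000, and a "model" that always
# predicts "not fraud" (1 = fraud, 0 = not fraud).
labels = [1] + [0] * 9_999          # ground truth: exactly one fraud case
predictions = [0] * 10_000          # the trivial model never flags anything

correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)
print(f"Accuracy: {accuracy:.2%}")  # 99.99%, yet it catches zero fraud
```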
To address this issue, we need to introduce other metrics that assess performance more meaningfully.
Everything starts from TP, TN, FP, and FN (true positives, true negatives, false positives, and false negatives). We need to digest these four concepts before delving into the metrics.
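As a rough illustration of what the four counts mean, here is a small Python sketch; the helper name `confusion_counts` and the toy labels are hypothetical, chosen only to show how each count is tallied:

```python
# Counting TP, TN, FP, FN for a binary problem (1 = positive, 0 = negative).
def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # predicted positive, actually positive
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # predicted negative, actually negative
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted positive, actually negative
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # predicted negative, actually positive
    return tp, tn, fp, fn

tp, tn, fp, fn = confusion_counts(y_true=[1, 0, 0, 1, 0], y_pred=[1, 0, 1, 0, 0])
print(tp, tn, fp, fn)  # 1 2 1 1
```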

Notes:

$$ \text{Accuracy} = \frac{\text{True Positives} + \text{True Negatives}}{\text{Total}} $$
Accuracy: Out of all the predictions, how many did the model get right?
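The formula translates directly into code; this quick sketch reuses the hypothetical counts from the example above:

```python
# Accuracy from the four counts, matching the formula above.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

print(accuracy(tp=1, tn=2, fp=1, fn=1))        # (1 + 2) / 5 = 0.6
print(accuracy(tp=0, tn=9_999, fp=0, fn=1))    # 0.9999 -- the fraud example again
```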
Note: