25mins read Transposed convolutions are a revolutionary concept for applications like image segmentation, super-resolution, etc., but sometimes they become a little trickier […]
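To get a feel for the operation itself, here is a minimal sketch (assuming PyTorch and toy shapes, not code from the article) showing how a stride-2 transposed convolution upsamples a 4×4 feature map to 8×8, the typical decoder step in segmentation and super-resolution networks:

```python
import torch
import torch.nn as nn

# Toy input: batch of 1, a single 4x4 feature map.
x = torch.randn(1, 1, 4, 4)

# A transposed convolution with kernel size 2 and stride 2
# doubles the spatial resolution.
upsample = nn.ConvTranspose2d(in_channels=1, out_channels=1,
                              kernel_size=2, stride=2)

y = upsample(x)
print(x.shape)  # torch.Size([1, 1, 4, 4])
print(y.shape)  # torch.Size([1, 1, 8, 8])
```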
17mins read In general, perplexity is a measurement of how well a probability model predicts a sample. In the context of Natural Language Processing, […]
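As a minimal sketch of the definition (the token probabilities below are made up, not taken from the article), perplexity is the exponential of the average negative log-likelihood the model assigns to each token:

```python
import math

# Probabilities a hypothetical language model assigns to each token
# of a held-out sentence (illustrative values only).
token_probs = [0.2, 0.1, 0.45, 0.3]

# Average negative log-likelihood per token, then exponentiate.
avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_nll)

print(f"average NLL = {avg_nll:.3f}, perplexity = {perplexity:.3f}")
```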
21mins read Classification predictive modeling typically involves predicting a class label. Nevertheless, many machine learning algorithms are capable of predicting a probability […]
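For instance (a minimal scikit-learn sketch on synthetic data, not code from the article), many classifiers expose both hard labels and class probabilities, and for a binary problem the hard label is simply the probability thresholded at 0.5:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary classification problem.
X, y = make_classification(n_samples=200, random_state=42)
clf = LogisticRegression().fit(X, y)

probs = clf.predict_proba(X[:3])           # per-class probabilities
labels = clf.predict(X[:3])                # hard class labels

print(probs)
print(labels)
print((probs[:, 1] >= 0.5).astype(int))    # matches the hard labels
```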
8mins read There are various metrics to evaluate a classification model: Accuracy, Precision, Recall, F1-score, and AUC-ROC score. However, it is always […]
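For reference, all five metrics are one call each in scikit-learn (a minimal sketch with made-up predictions, not taken from the article):

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Made-up binary ground truth, hard predictions, and predicted scores.
y_true  = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred  = [0, 1, 0, 0, 1, 1, 1, 1]
y_score = [0.1, 0.9, 0.4, 0.3, 0.8, 0.6, 0.7, 0.95]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1-score :", f1_score(y_true, y_pred))
print("auc-roc  :", roc_auc_score(y_true, y_score))
```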
17mins read The AUC (Area Under the Curve) of the ROC (Receiver Operating Characteristic) curve helps us visualize how well our machine learning classifier is performing. Although […]
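As a quick sketch of how such a curve is drawn (scikit-learn and matplotlib with made-up scores; not code from the article):

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

# Made-up labels and classifier scores for the positive class.
y_true  = [0, 0, 1, 1, 0, 1, 1, 0]
y_score = [0.2, 0.4, 0.35, 0.8, 0.1, 0.9, 0.65, 0.5]

fpr, tpr, _ = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

plt.plot(fpr, tpr, label=f"classifier (AUC = {auc:.2f})")
plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```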
10mins read What do BERT, RoBERTa, ALBERT, SpanBERT, DistilBERT, SesameBERT, SemBERT, SciBERT, BioBERT, MobileBERT, TinyBERT and CamemBERT all have in common? And […]
10mins read In a traditional classification problem formulation, classes are mutually exclusive. In other words, under the condition of mutual exclusivity, each […]
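To make the contrast concrete (a small sketch assuming scikit-learn; the labels are invented), in the multi-label setting a single sample can carry several labels at once, usually encoded as a binary indicator matrix:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Each sample may have several labels at once (not mutually exclusive).
samples = [{"sports"},
           {"politics", "economy"},
           {"sports", "economy"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(samples)

print(mlb.classes_)  # ['economy' 'politics' 'sports']
print(Y)             # [[0 0 1]
                     #  [1 1 0]
                     #  [1 0 1]]
```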
9mins read In one of my projects, I was wondering why I get the exact same value for precision, recall, and the F1 score when using scikit-learn’s metrics. […]
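One common reason (not necessarily the one discussed in the article) is micro-averaging: in a multi-class problem every misclassification is simultaneously a false positive for the predicted class and a false negative for the true class, so micro-precision, micro-recall, micro-F1, and accuracy all coincide. A small sketch with made-up labels:

```python
from sklearn.metrics import (precision_score, recall_score,
                             f1_score, accuracy_score)

# Made-up multi-class labels: 5 of 8 predictions are correct.
y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 2, 2, 1, 1, 0, 2, 0]

print(precision_score(y_true, y_pred, average="micro"))  # 0.625
print(recall_score(y_true, y_pred, average="micro"))     # 0.625
print(f1_score(y_true, y_pred, average="micro"))         # 0.625
print(accuracy_score(y_true, y_pred))                    # 0.625
```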
30mins read Deep neural networks are trained using the stochastic gradient descent optimization algorithm. As part of the optimization algorithm, the error for […]
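As a minimal sketch of the generic update rule w := w - lr * dL/dw (a single linear neuron with squared error; the data and names below are invented for illustration):

```python
import numpy as np

# One stochastic gradient descent step for a single linear neuron
# trained with squared error L = 0.5 * (y_hat - y)**2.
rng = np.random.default_rng(0)
w = rng.normal(size=3)            # weights
x = np.array([0.5, -1.2, 2.0])    # one training sample
y = 1.0                           # its target
lr = 0.1                          # learning rate

y_hat = w @ x                     # forward pass
error = y_hat - y                 # prediction error
grad = error * x                  # dL/dw
w -= lr * grad                    # weight update

print("error before step:", error)
print("error after step :", w @ x - y)
```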