Computer-aided diagnosis using deep learning approaches has made tremendous improvements in medical imaging for automatically detecting tumor area, tumor type, and grade. These advancements, however, have been limited by the facts that 1) medical image datasets are often small, leading to overfitting, and 2) the images exhibit significant inter-class similarity and intra-class variation. To tackle these issues, we propose a Synergic Deep Learning (SDL) model [Zhang et al., 2018, 2019] with an AlexNet [Krizhevsky et al., 2012] backbone for the automatic grading of glioma tumors. The SDL architecture enables two pre-trained models to mutually learn from each other, allowing them to perform better than vanilla pre-trained models. Our study uses 417 T1-weighted sagittal tumor Magnetic Resonance Imaging (MRI) slices obtained from 20 patients in the REMBRANDT dataset [Scarpace et al., 2019]. These slices are pre-processed and augmented before being fed into the model, which classifies each tumor into one of three grades: oligodendroglioma, anaplastic glioma, or glioblastoma multiforme. The proposed architecture achieves a training accuracy of 98.36% and a testing accuracy of 92.85%. Finally, the proposed SDL model with an AlexNet backbone outperforms popular pre-trained models in terms of testing accuracy, sensitivity (recall), specificity, and F1 score.
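The synergic mechanism can be made concrete with a short sketch. The PyTorch code below is a minimal, hypothetical illustration of a two-branch SDL setup with AlexNet backbones, following the pair-based formulation of Zhang et al.: each branch classifies one image of a sampled pair, and a synergic head predicts from the concatenated penultimate features whether the pair shares a grade. The class and function names (SynergicDualAlexNet, sdl_loss), the synergic-head layer sizes, and the loss weight lam are assumptions for illustration, not the paper's exact configuration; the weights API assumes torchvision >= 0.13.

```python
import torch
import torch.nn as nn
from torchvision import models  # assumption: torchvision >= 0.13 weights API


class SynergicDualAlexNet(nn.Module):
    """Sketch of a two-branch Synergic Deep Learning (SDL) model.

    Two ImageNet-pretrained AlexNet branches each classify one image of a
    pair; a synergic head looks at the concatenated 4096-d penultimate
    features and predicts whether the two images share the same grade.
    """

    def __init__(self, num_classes=3):
        super().__init__()
        self.branch_a = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        self.branch_b = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
        # Swap the 1000-way ImageNet layer for a 3-way glioma-grade layer.
        self.branch_a.classifier[6] = nn.Linear(4096, num_classes)
        self.branch_b.classifier[6] = nn.Linear(4096, num_classes)
        # Synergic head over the concatenated features of the image pair.
        self.synergic = nn.Sequential(
            nn.Linear(4096 * 2, 1024), nn.ReLU(), nn.Linear(1024, 1)
        )

    @staticmethod
    def _penultimate(branch, x):
        # Run AlexNet up to, but not including, its final linear layer.
        f = branch.avgpool(branch.features(x))
        return branch.classifier[:6](torch.flatten(f, 1))  # 4096-d features

    def forward(self, x1, x2):
        fa = self._penultimate(self.branch_a, x1)
        fb = self._penultimate(self.branch_b, x2)
        logits_a = self.branch_a.classifier[6](fa)
        logits_b = self.branch_b.classifier[6](fb)
        same_logit = self.synergic(torch.cat([fa, fb], dim=1))
        return logits_a, logits_b, same_logit


def sdl_loss(model, x1, y1, x2, y2, lam=1.0):
    """Per-branch cross-entropy plus the synergic pair loss, weighted by lam."""
    logits_a, logits_b, same_logit = model(x1, x2)
    same = (y1 == y2).float().unsqueeze(1)  # 1 where the pair shares a grade
    return (
        nn.functional.cross_entropy(logits_a, y1)
        + nn.functional.cross_entropy(logits_b, y2)
        + lam * nn.functional.binary_cross_entropy_with_logits(same_logit, same)
    )


# Smoke test with random stand-ins for paired 3-channel 224x224 MRI slices
# (grayscale slices are commonly replicated across channels to fit AlexNet).
model = SynergicDualAlexNet(num_classes=3)
x1, x2 = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
y1, y2 = torch.randint(0, 3, (4,)), torch.randint(0, 3, (4,))
sdl_loss(model, x1, y1, x2, y2).backward()
```

Under this formulation, each branch receives a gradient not only from its own classification error but also from the synergic term whenever the same-grade verdict on the pair is wrong, which is the mechanism by which the two networks mutually learn.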