This work focuses on the use of electroencephalogram (EEG) signals to classify four human emotions, namely amused, disgusted, sad, and scared, elicited by custom-made video clips. The proposed model uses independent component analysis (ICA) for artifact removal, band power and Hjorth parameters for feature extraction, and neighborhood component analysis (NCA) and minimum redundancy maximum relevance (mRMR) for feature selection. These techniques are combined because each has been shown to improve accuracy when used individually, yet they are rarely applied jointly in EEG-based emotion studies. The results of six machine learning models are compared: decision trees, support vector machines, k-nearest neighbors, naive Bayes, random forest, and a long short-term memory (LSTM) recurrent neural network (RNN). The highest accuracy attained in this study is 99.1%, achieved with the LSTM RNN as the classifier, combined NCA and mRMR for feature selection, and combined band power and Hjorth parameters for feature extraction.
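The two feature families named above are standard, well-defined quantities, so a minimal sketch of how they might be computed is given below. This is an illustration assuming NumPy/SciPy on a single-channel signal; the function names, sampling rate, and synthetic test signal are hypothetical and not taken from the paper.

```python
import numpy as np
from scipy.signal import welch

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    dx = np.diff(x)       # first difference (discrete derivative)
    ddx = np.diff(dx)     # second difference
    activity = np.var(x)  # activity: signal variance (power)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    # complexity: mobility of the derivative relative to the signal's mobility
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def band_power(x, fs, band):
    """Approximate power in a frequency band via Welch's PSD estimate."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df  # rectangle-rule integral of the PSD

# Example: 4 s of a synthetic 10 Hz (alpha-band) signal at 128 Hz
rng = np.random.default_rng(0)
fs = 128
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))

alpha = band_power(x, fs, (8, 13))       # dominated by the 10 Hz component
act, mob, comp = hjorth_parameters(x)
```

In practice these features would be computed per channel and per frequency band (e.g., delta through gamma), then concatenated into the feature vector passed to NCA/mRMR selection.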