Social media platforms allow billions of individuals to share their thoughts, likes and dislikes in real time, largely without censorship. This freedom, however, carries a cyber-security risk: threats are harder to detect in an online world where anonymity and false identities are ever-present, and the speed at which deceptive identities evolve calls for automated solutions to detect identity deception. Cyber-security threats caused by humans on social media platforms (SMPs) are widespread and warrant attention. This research proposes a solution for the intelligent detection of deceptive identities contrived by human individuals on SMPs. First, machine learning models are evaluated using account attributes available on SMPs, such as the profile image. To improve on the results delivered by these models, findings from the field of psychology, such as the observation that people lie about their gender, are incorporated. Newly engineered features, such as the gender derived from the profile image, are then evaluated to determine whether they detect deception with greater accuracy. Furthermore, results from research on detecting non-human (bot) accounts are leveraged to improve on the initial results. Finally, these machine learning results are applied to a proposed model for the intelligent detection and interpretation of identity deception on SMPs. This paper shows that the cyber-security threat of identity deception could be reduced if the vulnerability in the current way of setting up user accounts on SMPs were re-engineered.
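The kind of feature-engineering comparison the abstract describes can be sketched as follows. This is a toy illustration only, assuming a synthetic dataset and a generic classifier (scikit-learn's `RandomForestClassifier`); the feature names, the labeling rule, and the model choice are assumptions for the sketch, not the paper's actual pipeline.

```python
# Hypothetical sketch: compare a baseline feature set against one augmented
# with an engineered feature analogous to "gender-derived-from-the-profile-image".
# All data is synthetic; in this toy setup, deception is (noisily) tied to a
# mismatch between the stated gender and the image-derived gender.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Baseline profile attributes (synthetic stand-ins).
stated_gender = rng.integers(0, 2, n)        # gender claimed on the profile
account_age_days = rng.integers(1, 3000, n)  # age of the account in days

# Engineered feature: gender inferred from the profile image (synthetic).
image_gender = rng.integers(0, 2, n)

# Synthetic label: deceptive when stated and image-derived gender disagree,
# with 10% label noise (a simplifying assumption for this sketch).
deceptive = ((stated_gender != image_gender) & (rng.random(n) < 0.9)).astype(int)

X_base = np.column_stack([stated_gender, account_age_days])
X_eng = np.column_stack([stated_gender, account_age_days, image_gender])

def evaluate(X, y):
    """Train a classifier on a train split and return held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

acc_base = evaluate(X_base, deceptive)
acc_eng = evaluate(X_eng, deceptive)
print(f"baseline accuracy: {acc_base:.2f}, with engineered feature: {acc_eng:.2f}")
```

On this synthetic data the engineered feature carries the deception signal, so the augmented model scores higher; on real SMP data, of course, the improvement must be measured empirically, which is what the paper's evaluation does.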