This dissertation focuses on the ethics of artificial intelligence, specifically in relation to Lethal Autonomous Weapons Systems (LAWS). I investigate which system of moral reasoning is best suited to govern the conduct of LAWS in warfare, and argue that virtue ethics is a solution to the problem of artificial moral reasoning in the case of LAWS. This is distinct from asking whether a virtue ethics system of moral reasoning should inform the environment in which LAWS are programmed; my focus is instead on whether, once it becomes possible to establish a system of codifiable moral reasoning, virtue ethics would be the best normative choice for such a system. My argument rests on the following premises: the moral status of LAWS in warfare should be that of combatants, and thus LAWS should have some level of moral agency; it is morally justifiable to deploy LAWS in warfare; a combination of top-down and bottom-up approaches (a hybrid approach) could be the best programming approach for artificial moral reasoning in LAWS; and a hybrid approach using Aristotle’s virtue ethics framework, coupled with defeasible reasoning, is the best solution at present for artificial moral reasoning in LAWS. These premises are developed chapter by chapter, each explained in detail to show how it fits into the argument stated above, culminating in the conclusion that virtue ethics is a solution to the problem of artificial moral reasoning in the case of LAWS.