I think there's a fundamental moral issue about whether it's right for a machine to decide to kill a person. It's bad enough that people are deciding to kill people, but at least they perhaps have some moral argument that they're doing it ultimately to defend their families or to prevent some greater evil.