Being an ethically responsible person is something I strive for. In my experience, it manifests as a feeling of incompleteness if I don’t focus on it. Unlike other goals, such as doing well in school or staying fit, ethical responsibility is harder to define, and it is hard for me to measure how ethical I am overall. In school you can look at your grades; in running, your times; in tennis, your match results; my ethical performance lacks similar metrics. I’m not sure it is even possible to quantify how ethical someone is. More often I find that you can tell when something is unethical: certain actions give you an uneasy feeling, and I usually rely on my intuitions to tell me whether something is morally correct. Many times I find that my “reasoning” about the ethical ramifications of an action is really just a justification for my initial judgement.
I do think that most people agree about most ethical issues. In almost every community around the world, regardless of culture, acts like murder are deemed unethical. The interesting ethical cases, where reasonable people can disagree, challenge us to give more structure to our intuitions in order to justify our views. Personally, I like to think about things from both a utilitarian viewpoint and a Kantian perspective. The challenge is to find a theory that gels with my intuitions all of the time. When utilitarianism lets us justify terrible actions for the greater good, that feels wrong. When Kantian thinking makes unreasonable demands on individuals to adhere to strict laws, that also feels wrong. Here again it is easier to tell when something is wrong than to determine rules for correctness.
In terms of computer science and technology, I am concerned that the systems we are currently building will not be equitable and fair. I was reading an article about new essay grading technology that attempts to use machine learning models to grade student submissions without instructor feedback. One ramification of this approach was that the models gave weight to the range of vocabulary used in an essay: in the training set, essays that used a wider range of words tended to score higher, and students with wider vocabularies tend to come from more privileged backgrounds. The models created similar proxies for what makes a good essay along various other lines. The end result was that they did not evaluate the logical arguments laid out in a paper but instead discriminated disproportionately against students from lower socioeconomic classes. The idea that an ostensibly impartial computer system could negatively impact students’ futures based on factors outside their control is incredibly scary.
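To make the proxy problem concrete, here is a minimal, purely hypothetical sketch in Python. The data is synthetic and invented for illustration; nothing here comes from the article. It shows how a model fit to grades that historically rewarded wide vocabulary learns to score vocabulary instead of argument quality:

```python
import numpy as np

# Hypothetical synthetic data: each "essay" is reduced to two features.
# argument_quality is what a grader should care about; vocab_size is a
# stand-in for vocabulary range, which here correlates with privilege.
rng = np.random.default_rng(0)
n = 1000
argument_quality = rng.normal(size=n)
privilege = rng.normal(size=n)
vocab_size = 0.8 * privilege + 0.2 * rng.normal(size=n)

# Historical graders rewarded wide vocabulary, so it leaks into the labels.
grade = 0.5 * argument_quality + 0.5 * vocab_size + 0.1 * rng.normal(size=n)

# Fit a least-squares model on the observable features.
X = np.column_stack([argument_quality, vocab_size])
weights, *_ = np.linalg.lstsq(X, grade, rcond=None)
print(f"learned weights: quality={weights[0]:.2f}, vocab={weights[1]:.2f}")

# The trained model ranks a weak essay with a wide vocabulary above a
# well-argued essay with a plain one.
strong_plain = np.array([1.5, -1.0])  # strong argument, modest vocabulary
weak_fancy = np.array([-0.5, 2.0])    # weak argument, wide vocabulary
print("strong/plain essay score:", strong_plain @ weights)
print("weak/fancy essay score:  ", weak_fancy @ weights)
```

The model never sees privilege directly; it only needs a feature correlated with the training labels to reproduce the bias.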
I think all computer scientists have a duty to understand how their work can impact others. If we don’t focus on building equitable and ethical systems now, we might cause societal problems that could take decades to resolve. We have an amazing opportunity to create systems that amplify the ideas of others and give individuals social mobility that is unparalleled across most of human history. But that opportunity comes with the danger of entrenching current societal problems in our computing systems, making it harder to ever resolve them. I’m hopeful that we can do the former and not the latter.