reading-notes

My learning journal for Code Fellows

Ethics in Tech

Code of Ethics

The path into tech is relatively open to all comers at this point in history. That is an amazing thing, but because of the relatively low bar to entry, a new fact has arisen: there is no Hippocratic oath required to become a coder. Starting a journey into tech is exciting, as you are handed tools that could very easily shape the future. With that power, however, one should be expected to understand and adhere to a set of ethics while creating this potentially world-changing code. Only create code that operates with people’s privacy in mind, for example. Privacy being one of the newest in a long line of endangered resources, it is our duty as tech professionals to protect it while it still exists.

Project Dragonfly

Reading about this internal Google project is equal parts saddening and terrifying. Essentially, Google was caught having built an app-based search engine (dubbed Dragonfly) for the Chinese market that would censor search results, namely by hiding results pertaining to human rights, peaceful protests, democracy, and other topics the Chinese state had chosen to blacklist. While the censoring of information is nothing new, especially under authoritarian regimes, it is wildly sad that a company such as Google would go out of its way to build an app for such purposes. The open flow of information is one of the things humans value most, and this isn’t a new concept. Look no further than the fact that it is still a federal offense to tamper with mail; previously it was a federal offense even to delay its delivery. The one silver lining of this story rife with failures of character is the reason it is even a story at all: over 1,400 Google employees realized that something was amiss and sought to bring those injustices to light, so that we might all be more vigilant. These are the ethics to strive for: not only avoiding unethical behavior yourself, but calling it out when you see it so that something can be done about it.

Self Driving Car Ethics

The trolley problem has become a real-life and very concrete thing as self-driving cars begin their journey to ubiquity. The trolley problem is the hypothetical of whether, given the choice, someone would allow a trolley to run over and kill multiple people on the tracks, or pull a lever that diverts it and kills only one person. Macabre for sure, but when self-driving cars are programmed, these decisions need to be dictated by code written by a human. The one thing that stands out to me in this article is the wildly arrogant tone some proponents of the self-driving car take when talking about such situations. Most automakers say the issue is simply a non-issue, citing mainly that the system will have multiple redundancies to ensure situations never arise where such dire decisions need to be made. (One Manuela Papadopol even goes so far as to say, “I don’t remember when I took my driver’s license test that this was one of the questions,” when asked what a car would do in a situation similar to the trolley problem, nearly entirely missing the point.) This, in my opinion, is both wildly inaccurate and a wildly cavalier attitude to take when literal lives could be on the line.

AI at Google

AI is a tricky thing to try and control. There are so many instances where companies have released a public-facing AI chatbot only for it to go horribly wrong, horribly fast. The fact that Google seems to have a strong moral compass when it comes to AI is great, and I think the rules they set up in their manifesto are wholly good. The rub comes in the application of those morals. A machine learning system is really only as just as the data it interacts with, coupled with how strong its ethical programming is. We know that, at least on the consumer-facing side of things, the data will drive a chatbot to say morally reprehensible things quicker than should be normal. So that leaves the moral programming, something that quite frankly leaves me worried at best, considering this is the same company responsible for Project Dragonfly.