
By Rebecca McCauley Rench

"An equal application of law to every condition of man is fundamental."
--Thomas Jefferson to George Hay, 1807. ME 11:341

The United States of America was founded on the principle of equal, inalienable rights, which we believe are a defining feature of an advanced civilization and necessary for stability in our culture and government. While it took our nation over a century to recognize that these rights apply to individuals regardless of their gender, race, or any other superficial trait, we have continued to move towards a society where all individuals are equal in the eyes of the law.

The US Constitution was a statement of doctrine, and we used it to define laws on the assumption that all men and women should have equal standing under the law. But today our scientists are on the verge of creating non-human sentience in the form of computer intelligence. Our founders did not foresee the possibility of non-human sentience, and we will need to revisit the assumption that humans are the only sentient beings our doctrine must consider. Any sentient being should have the equality and fair treatment that we have deemed necessary for our society. We should expand our concept of rights beyond the human condition and recognize that these inalienable rights must be universal, defined by key characteristics that are inclusive of, but not exclusive to, humanity. These key characteristics include a minimum level of intelligence, free will, the ability to communicate, and self-awareness. Under common law today, the mentally disabled and people unable to make a conscious choice are not responsible for their actions, while all conscious and able-minded individuals are. By the same logic, all sentient beings should be responsible for their actions and also afforded the liberties and rights of humans with the same intellectual abilities.

Many have proposed that new laws be put in place to govern the treatment, liability, and rights of non-human beings with artificial intelligence, yet our own history has proven that separate but equal does not work and is fundamentally incapable of holding all equal in the eyes of the law. As such, we propose a Declaration of Universal Rights to clearly provide all sentient beings with the same rights and privileges as human beings, regardless of the origin of their being.

In accordance with our commitment to equality, justice, and preservation of inalienable rights, the United States of America should view all sentient beings with equality in the eyes of the law.

We hold these truths to be self-evident, that all sentient beings are created equal, that they are endowed by their creator with certain inalienable rights, that among these are existence, freedom of thought, and the pursuit of purpose.

By Charles Mueller

We are blindly directing the evolution of this planet. Last week I came across an article discussing the discovery of a new bacterium that digests PET (polyethylene terephthalate) plastic. The researchers discovered the microbes by examining the trash we keep filling the world with. These microbes evolved on their own to digest plastic because we created an environment where the only thing they had to eat was PET plastic.

This is blindly directing evolution. We are responsible for it, yet at no point was this ever our intent. It is simply a side effect of the choices we’ve blindly made as we take more and more control over how the world evolves. At least with things like GMOs we are consciously choosing how certain organisms “evolve”. While in many ways this was a really cool discovery that could help us deal with problems like trash pollution, it makes me wonder in what other ways life on this planet is evolving because of our visionless choices. It is likely that for every happy accident like this one, there is an equally bad one right around the corner.

There is a more intelligent way to do this, one that can help us create the good and prevent the bad. Technologies like CRISPR are enabling a future where we can open our eyes and put more thought into how our choices will shape the evolution of life on this planet. The discovery of this new bacterium reminds us that we are driving the car of evolution blindfolded, with no steering wheel or brakes. While we debate whether or not we should use technologies like CRISPR to engineer ourselves, for fear we might direct our evolution in a damaging way, let’s remember that we are already doing so with little to no control or knowledge of how it is currently playing out.

I get that directing evolution with intent sounds crazy, but the reality is that doing it blindly is crazier. The future of humanity, of life, is a future of design. Let’s make sure we acknowledge this and do our best to ensure that intelligent rather than blind choices are directing the future of evolution.

By Paul Syers

Nature misses the mark when examining the question of what problems future generations will face. The most recent issue of Nature, released this week, takes a break from its usual way of thinking and tackles the bold concept of looking far into the future. I was excited to dive into one article that began by asking how well we can predict the ways our decisions today will affect future generations. Imagine my disappointment when the article turned out to be nothing more than a vehicle for discussing nuclear waste disposal. While that particular issue is important and will be for generations to come, I find it an incredibly limited focus for discussing the impact of our actions on future generations.

In some ways, saying we need to improve how we store nuclear waste makes the same assumptions that the beginning premise of the article warns against. With the types of capabilities we will soon acquire in the areas of genetic manipulation, neurotechnology, and machine learning, our civilization and even species could look very different in as little as the next three generations. Nuclear waste is just one small piece in a very large, complicated puzzle, and it’s likely we don’t even have all the pieces yet. If you’re going to ask that question, why not tackle it head on and acknowledge this?

I do agree that it would be more useful, when thinking about the future, to separate the discussion into close future generations and remote future generations. With close future generations, we have a reasonable idea of the possible directions things can go. When considering remote future generations, it is incredibly naïve to think of them as living like us, operating with remotely the same technology as us, or even existing as a single species, as we currently do. The likelihood that remote future generations solve the problem of nuclear waste contamination could be just as high as the likelihood that those theoretical generations are even civilized, or limited to this planet. The decisions we make about CRISPR technology, about the protocols we build into learning machines, and about the lengths we will go to keep nuclear material out of the wrong hands will do far more to determine what things will be like for future generations. I’d like to hear the thoughts of the world’s greatest minds on that. We bury our waste in the ground; we shouldn’t bury our heads there too.

By Paul Syers

Not every new technology requires new regulations to govern its use. When examining problems created by innovation, we should look to the jurisdiction and applicability of existing laws before trying to write new ones that will further complicate the whole system. Writing knee-jerk laws to regulate the size of airline seats, for example, is not good practice. It only gunks up the works and slows down both the functioning of government and the pace of innovation.

I’m not saying that all regulation is bad. We are all demonstrably safer because of the seatbelt regulations for cars enacted in 1967. I don’t want to return to the world of Upton Sinclair’s The Jungle.

New innovations will always bring about new situations that raise questions about liability and personal freedoms. But rarely do these new situations differ so much that wholly new legislation is required. The coming era of autonomous vehicles means that eventually someone will get into a wreck with one. Last I checked, however, an AV program is not a sentient being; it has both a maker and an owner, and there is more than enough software liability precedent to cover things. In the fight between the FBI and Apple, some have suggested that the All Writs Act is antiquated because it was created in 1789, yet no one questions any part of the Bill of Rights, which Congress proposed that same year. The Director of the FBI stated in front of Congress that he is confident the courts can make a ruling in this particular case.

So let’s stop itching to write new laws and spend more time understanding the laws we do have and adapting them to work better.

By Paul Syers

In most Sci-Fi movies, artificial intelligence comes in the form of a scary, emotionless entity intent on destroying humanity. A small number of movies, however, such as D.A.R.Y.L., Her, and arguably even Toy Story and Ted 2 (their characters are certainly intelligent, non-human beings, even if there is no explanation of how they came to be intelligent), have A.I. characters as endearing protagonists. What makes the audience like these particular A.I.s is that they express emotions: they can feel. Expressing complex emotions is something we use to define the human experience, but what if it's more than the key to humanity? What if it's the key to human-level intelligence?

Emotions stem from the innate drive for self-preservation, but the complex range of emotions that we see in humanity is so much more than that. A being with our intelligence but without our emotions is a major source of the fear surrounding A.I.: without emotions, the argument goes, a more intelligent being would either enslave or eradicate humanity. But wouldn't the ability to empathize, sympathize, even hope and regret help prevent such actions?

Some groups are working on programs that can recognize human emotions (Apple just bought one such company), which is a good start. In fact, such efforts highlight how little understanding we currently possess regarding human emotion. We need to do more on both fronts: put more effort into harnessing neurotechnology to deepen our understanding of human emotion, while at the same time diving into creating programs that actually produce emotions.

With a greater understanding of emotions, intelligence, and the relationship between the two, we can not only create new beings that both think and feel, but also better control and govern ourselves. In the process, we will hopefully see that the fundamental rights we hold so dear should extend to all beings with our level of intelligence or above. That's how we create a future that looks like A.I.: Artificial Intelligence and avoid one that looks like Terminator Salvation.