Edward O. Wilson and his diagnosis of humanity in a tweet | 3,500 Million | future planet

admin786786 | January 20, 2022

On December 26, Edward O. Wilson, the biologist, writer and naturalist often called the Darwin of the 21st century, died. His work was key to understanding how evolution explains animal behavior. He began by studying the social systems of ants and ended up applying his findings to humans, with conclusions that displeased both ends of the political spectrum, unaccustomed to being told that genetics has something to do with the way we act. Today that idea is widely accepted. He also helped develop the theory of biodiversity, and it is thanks in part to him that everyone today knows what the term means.

He did not focus only on biology. He was a humanist, author of the book Consilience, which sought to unify the knowledge of the different branches of science and aspired to do the same with the humanities. At the end of a 2009 interview for Harvard Magazine, he offered the most accurate diagnosis of humanity ever made in under 140 characters: “We have Paleolithic emotions, medieval institutions, and godlike technology. And that’s terribly dangerous.” That sentence packs in a great deal. What follows is an attempt to unpack it.

At a time when all our efforts should be devoted to combating climate change, the number of self-inflicted risks that could end in technological disaster has multiplied. They have not materialized yet, but they can distract us from the goal we should be focused on: the decarbonization of our lives. Let us see how the problems Wilson mentioned are related, starting with some examples (there could be many more) of that technology with godlike capacities.

We have Paleolithic emotions, medieval institutions, and godlike technology. And that’s terribly dangerous

Artificial intelligence, dangerous enough in general uses, becomes terrifying in military ones. It is not a matter of the future: it is already possible to buy autonomous drones with the ability to decide when and whom to kill. Turkey has used them in Libya, with proven effectiveness.

Another far-from-reassuring possibility is that of future pandemics caused by synthetic microbes. This has been feasible for several years, ever since a type of experiment known as gain-of-function research showed that a deadly but poorly transmissible strain of bird flu could be turned into a highly transmissible one.

Gene editing with CRISPR opens the door to two dangers of unknown consequences. The first is the editing of heritable human traits: changes made to the genome that will be passed on to offspring, unlike gene therapies for diseases such as thalassemia. It has already been done in China, with the birth of twin girls in 2018, in a case that scandalized the scientific community for skipping every ethical standard. The second danger is the suppression of entire species by introducing genes that block reproduction. This could be beneficial if it eliminated diseases such as mosquito-borne malaria, but its ecological consequences are unknown.

Other technologies can affect the livelihoods of people in poor countries. Synthetic coffee, for example, could end the work of the 125 million people who make a living growing the crop, most of them in developing countries.

The same can happen with laboratory-grown meat, which could have beneficial effects on climate change by reducing the number of methane-emitting ruminants, but which would leave tens of millions of families who live off livestock without income. Another form of progress may be even more serious: industrial automation could deny less advanced countries the chance to move beyond the primary sector (agriculture, fishing, mining), which pays less and faces more unstable prices than industry or services. In this article, Jeffrey Sachs explains how productivity gains from robotics can harm the exports of the less technologically advanced.

But the most dangerous technology is the one that exploits the Paleolithic emotions Wilson mentioned as the second leg of the problem. The far right has understood better than the left how to use social networks to manipulate emotions, bringing out the worst in human beings. Artificial intelligence, which drives the algorithms deciding what we see on Facebook or Twitter, amplifies that power. Add deepfakes (doctored videos that can show famous people saying whatever the manipulator of the day wants) and the picture becomes worrying. Just when we most need to understand the risks ahead and start remedying them, the number of anti-vaxxers, cranks and deniers of every stripe feeding off WhatsApp is growing, increasing the political weight of stupidity.

Wilson’s third problem concerns the inability of our current institutions to address these risks. Any of the aforementioned dangers should be controlled by agile and effective organizations, capable of bringing the scientific community into agreement with multilateral institutions to stop abuses or irresponsibility. That is difficult, because the race to appear on the covers of scientific journals, to obtain the most profitable patents, or to gain an edge in creating new start-ups or lethal weapons does not reward prudence.

In short: we have a technology capable of doing wonders, but with serious side effects if misused; a sector of the population stirred up on WhatsApp by the far right (and sometimes the far left), ready to oppose the solutions needed to face problems ranging from climate catastrophe to the pandemic; and, finally, institutions that are not up to the challenges before them. Let’s hope nothing happens to us!
