
The Ethical Machine: Ethical Aspects of the Regulation of Artificial Intelligence.

Nicolás H. Varela

“It was the best of times, it was the worst of times,

it was the age of wisdom, it was the age of foolishness,

it was the epoch of belief, it was the epoch of incredulity,

it was the season of light, it was the season of darkness,

it was the spring of hope, it was the winter of despair”

Charles Dickens


This essay identifies key ethical aspects of the regulation of systems that involve artificial intelligence. We will explore some of the risks generated by the lack of adequate regulation, some of the recommendations of experts in the field, and finally some of the international initiatives already under way to regulate artificial intelligence, in order to identify essential principles that should be taken into account when regulating this technology.


In recent years, a new technology with unparalleled potential to transform most aspects of ordinary life has been developing: Artificial Intelligence (hereinafter AI). AI differs from any other technology in that it is capable of self-learning, making predictions, taking autonomous decisions, and emulating human cognitive abilities. Such technology implies great benefits and opportunities for humanity, but also great risks. Developing it without a broad debate on the ethics of AI, to make sure its use is safe, could result in the kind of dystopian future that a large body of literature and film has already forecast.

Even though various experts, countries, companies, and international organizations have already taken the first steps towards providing recommendations to regulate this technology, it is no exaggeration to say that we are still at the beginning of a new era, considered by various authors the Fourth Industrial Revolution (PwC, 2018), in which technology is merging the physical, digital and biological worlds (Perasso, 2016) and thus transforming the very notion of what it means to be a human being. There is probably still a long way to go before this technology reaches its full development, and we must prepare for that scenario.

AI is disrupting all kinds of industries and services: health, with intelligent disease detection; security, with the use of facial recognition; transportation, with the development of autonomous cars; and many more uses aimed at some of the most pressing challenges facing humanity, such as climate change, poverty and corruption. However, the technology is also creating challenges never seen before, which require a broad debate about how to face these new dangers responsibly.

According to the theory of technological determinism, which understands technology as a key driver of social change, we can argue that technology transforms and is transformed by society (Smith & Marx, 1994). Societies change in relation to the technology available to them, and just as technologies of the past such as the steam engine or the electric light bulb changed the way humans lived, AI could have an even more transformative impact on society. This is why regulation must be kept up-to-date with the times we are living in, to ensure that the changes generated by AI are positive and protect human rights.

The problem of ethics in AI

Defining what makes a person ethical is not an easy task, but broadly speaking, we could consider that a person is ethical when they act understanding the consequences of their actions and decide to behave in a morally acceptable manner. This implies that an ethical dilemma requires a process of rational analysis and the capacity to make decisions. For this reason, and at least from a legal perspective, a minor or a person deemed legally insane could be found not guilty in a criminal case, as they may not be responsible for the acts they committed, lacking a clear notion of the consequences of their actions.

However, transferring this general principle to the decisions made by an AI is an even more complex task. The philosopher John Searle classified AI into two types: weak AI and strong AI. Weak AI systems are those that perform specific tasks automatically, based on the parameters under which they were programmed. A strong AI, on the other hand, is capable of imitating human intelligence in a general way, implying the capacity for abstraction, reflection, creative desire and improvisation that allows it to make its own decisions (Searle, 1980). Although strong AI is still a matter of science fiction, constant developments in the field could mean that one day AI achieves the ability to imitate human intelligence and even surpass it, which is known today as the singularity: the moment at which AI exceeds our capabilities and becomes a superintelligence (Vinge, 1993).

We could consider weak AI systems to be amoral, since they only operate according to how they were programmed, without an "opinion" or will of their own. In these cases, the only ethical aspect we could analyse is the intention of the human beings deciding how to use that technology. However, even though we still do not have strong AI, there are currently systems with a certain degree of autonomy that allow us to question whether the system itself is ethical, as it is able to make its own decisions.

To illustrate this point, we could analyse the case of autonomous cars, which can be programmed so that, in the event of an accident, they can choose whether to save the life of one pedestrian or another — a decision that a human being may not have the speed to analyse in the fraction of a second in which an accident takes place, but that a machine with a faster processor could. New questions then emerge: is it more important that the car decide to save the life of a child or that of an elderly person, if it is forced to make the decision? A woman or a man? Is it better to protect the lives of the passengers, so that those who buy the car feel safer, or those of pedestrians, to prevent the company from receiving lawsuits? These are some of the questions that "The Moral Machine"[1] tries to answer, collecting human opinions on the ethics of autonomous cars on a virtual platform. Beyond these, other dilemmas arise, such as whether the driver, the company that built the car, the programmer who wrote the code, or a combination of all of them should be held responsible in the case of an accident involving an autonomous car.

New risks generated by AI

As we have seen, new technologies imply new risks and the need for answers to questions that we have never asked ourselves. As the Uruguayan poet Mario Benedetti said: “when we thought we had all the answers, suddenly all the questions changed”.

The risk of these changes has led various experts and recognized figures such as Elon Musk, Noam Chomsky, Stephen Hawking, Peter Norvig (Google research director) and Mustafa Suleyman (founder of DeepMind Technologies), among many others[2], to warn about the risks of an immoral development of AI and its potential danger to humanity.

As systems become more sophisticated, the internal processes that machines carry out to make their decisions become increasingly complex. This can make it difficult for humans to understand how an AI has reached its decision. Added to this is the problem that algorithms are usually the private property of the company that designed them, and are therefore kept secret from the public.

An example of this problem can be found in the United States, where several States use an algorithm called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to advise the judge in determining whether or not to grant parole to a prisoner. However, a ProPublica investigation[3] found that the algorithm was biased and categorized black defendants as having a higher risk of recidivism than white defendants (Larson, 2016). This case exemplifies the risks of using these technologies secretly, especially in public spheres where citizens should be able to know how decisions are made and how taxpayers' money is being used (especially in an issue as important as human freedom).
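ProPublica's finding was essentially statistical: among defendants who did not in fact reoffend, one group was flagged as high risk far more often than the other. The following is a minimal sketch, using invented toy data rather than the real COMPAS records, of how an external auditor with access to predictions and outcomes might compare false positive rates across groups:

```python
# Toy illustration with invented data (NOT the real COMPAS records).
# Each record: (group, predicted_high_risk, actually_reoffended).
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", False, False), ("B", False, True), ("B", False, False),
]

def false_positive_rate(records, group):
    """Share of people in `group` who did NOT reoffend but were flagged high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for g in ("A", "B"):
    print(g, round(false_positive_rate(records, g), 2))
# → A 0.67
#   B 0.0
```

A gap like this between groups is exactly the kind of disparity that remains invisible while an algorithm stays proprietary and unaudited.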

We can find other examples of bias in AI systems at Google, where the company had to publicly apologize when its photo classification algorithm mislabelled images of black people as gorillas (Nieva, 2015). Amazon was criticized for using an AI recruitment system that was biased against female candidates (Lavanchu, 2018), and for selling facial recognition products to police forces with biases against women and black people (Singer, 2019). Apple Card, too, was criticized for being biased against women (Knight, 2019).

These examples show the potential risks of using AI without an appropriate human rights protection framework. As its use keeps growing across industries, the risks grow as well, reaching areas as important as democracy itself, as we have seen with the Cambridge Analytica scandal and the spread of fake news through social networks promoting hate, disinformation and discrimination, or the use of deep fakes that can lead citizens to believe a candidate said something they never actually said.

Consequently, private companies wield increasing influence and a unique power that allowed them even to censor the president of the United States, suspending his social media accounts on the grounds that he used them to incite violence after his followers broke into the Capitol by force (Alba & Koeze, 2021).

Yet even though democracy is paramount, probably one of the biggest risks of the use of AI today is the development of military weapons. The United States, for example, is using combat drones with a certain degree of autonomy, allowing these robots to make their own decisions about who to shoot (Voi, 2021).

Modern attempts to regulate AI

As we have seen, the possible applications of AI systems are so diverse that establishing a single framework for them is a challenging task. Various countries, international organizations, foundations, associations and private companies have already begun to develop the bases and principles for the ethical use of AI. However, these processes often take place in a reactive rather than proactive manner: only when errors and failures in these systems draw criticism from citizens are their creators forced to mitigate the damage.

As early as 1942, Isaac Asimov, considered one of the best science fiction writers, set out the three fundamental laws of robotics[4] in his short story "Runaround", which we could consider one of the origins of ethical thinking about robotics and AI. However, as often happens with technology, reality has outpaced fiction, and the current framework for the ethics of AI has long surpassed these three laws. Among the most important recent milestones in the development of international AI regulation we can highlight:

· In 2015, Elon Musk (founder of SpaceX and Tesla), together with Sam Altman (president of the start-up accelerator Y Combinator), concerned about the potential risks of AI, founded the non-profit association OpenAI, which seeks to promote and develop AI that benefits humanity.

· In 2016, companies including Facebook (now Meta), Google, Microsoft, Amazon and IBM created a non-profit association called Partnership on AI, committed to the responsible use of AI.

· In 2017, during the Beneficial AI Conference organized by the Future of Life Institute, the Asilomar Principles were created: 23 proposed guidelines on AI research, ethics and problems. Among them: any decision of an autonomous AI system should be accompanied by an auditable explanation; AI should be compatible with the ideals of human dignity, rights, freedoms and cultural diversity; and AI technologies should benefit as many people as possible, with their economic prosperity shared for the benefit of all humanity (Strerling, 2018).

· Also in 2017, the Barcelona Declaration for the Proper Development and Usage of AI in Europe was signed, with the participation of various European AI experts. It highlighted six key points for the development of AI: prudence in developing these technologies; reliability, so that AI systems are tested for safety; accountability, so that decisions made by AI are explainable; transparency, so that people know whether they are interacting with an AI or a human; constrained autonomy, to delimit where these systems may operate; and a clear statement of the role the human being plays.

· In 2018, Microsoft published the book The Future Computed: Artificial Intelligence and its Role in Society, in which it raised the need to modernize legislation, with special emphasis on the observance of sound ethical principles and on training for the new skills the labour market will require.

· In addition, in 2018 Google established seven AI principles after its employees complained that the company was working with the Pentagon on AI for military purposes. The principles state that AI should: be socially beneficial; avoid creating or reinforcing unfair bias; be built and tested for safety; be accountable to people; incorporate privacy design principles; uphold high standards of scientific excellence; and be made available only for uses in accordance with these principles (avoiding potentially harmful uses such as weapons, mass surveillance or violations of human rights) (Pichai, 2018).

· Also in 2018, Argentina hosted the First Forum on Artificial Intelligence, Internet of Things and Smart Cities in Latin America. The resulting Buenos Aires Declaration emphasized the importance of processing and managing data in an ethical manner and of using AI in public services.

· In 2019, the European Union's High-Level Expert Group presented a series of guidelines for trustworthy AI, which should be: lawful (respecting laws and regulations), ethical (respecting ethical principles and values), and robust (both from a technical perspective and taking into account its social environment). The guidelines laid out seven key requirements that AI systems must meet: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability (AI and its results must be answerable to external and internal auditors).

· Moreover, the European Parliament issued a resolution on a comprehensive European industrial policy on artificial intelligence and robotics. The document highlights that this policy must be based on: retraining workers in the industries most affected by the automation of tasks, models that are ethical by design and respect human rights, and flexible regulatory frameworks that allow innovation.

· The Organisation for Economic Co-operation and Development established the OECD AI Principles. The recommendation identified five values-based principles for responsible AI: it should benefit people and the planet; respect human rights and include adequate safeguards; be transparent, so that people can understand its decisions; operate safely, with potential risks continually assessed; and the organizations and individuals involved in its development and implementation must be held accountable for its operation.

· Also in 2019, the Institute of Electrical and Electronics Engineers (IEEE) produced a report[5] which highlighted, among its most important points, the need to involve manufacturers, governments and civil society in establishing and recommending standards and ethical codes in all spaces. In addition, it recommended ensuring that AI always remains under human control, and establishing standards and regulatory bodies to monitor the development of AI.

· In 2021, the 193 Member States of UNESCO (United Nations Educational, Scientific and Cultural Organization) adopted a Recommendation on the Ethics of AI, which seeks to harness the benefits AI brings to society and reduce its risks. It aims to ensure that digital transformations promote human rights and help achieve the United Nations Sustainable Development Goals, addressing issues of transparency, accountability and privacy, with action-oriented policy chapters on the governance of data, education, culture, work, health care and the economy. Among its points stand out: the protection of personal data (access to personal data records and a right to delete them), the prohibition of the use of AI systems for social scoring and mass surveillance, and the assessment of the ethical impact of AI in member countries. The Recommendation encourages States to appoint an independent AI ethics officer or create mechanisms to oversee AI implementation, and to protect the environment through data and energy efficiency and resources that help combat climate change.

· In 2021, the European Commission presented its Proposal for a Regulation on a European approach to AI (Artificial Intelligence Act), with recommendations on the ethical aspects of artificial intelligence, robotics and related technologies. Europe proposed four levels of risk. At the highest level is "unacceptable risk", for AI systems considered a threat to people's safety, livelihoods and rights. The second level is "high risk", for uses of AI in critical areas such as health, education, public services, law enforcement, immigration and the administration of justice. Next is "limited risk", which includes systems such as chatbots, which must meet a minimum level of transparency: users must be warned that they are talking to a machine. Finally, "minimal risk" covers the remaining uses, such as video games, email filters or image applications that do not involve risks; for these, the proposal specifies no additional measures.
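The four tiers above can be pictured as a simple lookup from use case to obligations. This is an illustrative sketch only: the tier names follow the proposal, but the example use cases and the one-line obligation summaries are simplified assumptions of ours, not the legal text.

```python
# Hedged sketch of the EU proposal's four risk tiers as a lookup table.
# Tier names follow the proposal; the example use cases and obligation
# summaries are illustrative assumptions, not the regulation itself.
RISK_TIERS = {
    "unacceptable": {"social scoring", "subliminal manipulation"},
    "high": {"credit scoring", "recruitment screening", "border control"},
    "limited": {"chatbot", "deepfake generator"},
    "minimal": {"spam filter", "video game ai"},
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "conformity assessment, risk management, human oversight",
    "limited": "transparency: users must know they interact with an AI",
    "minimal": "no additional obligations",
}

def classify(use_case: str) -> tuple[str, str]:
    """Return (risk tier, summary of obligations) for a given use case."""
    for tier, cases in RISK_TIERS.items():
        if use_case.lower() in cases:
            return tier, OBLIGATIONS[tier]
    return "minimal", OBLIGATIONS["minimal"]  # everything else falls through

print(classify("chatbot"))
# → ('limited', 'transparency: users must know they interact with an AI')
```

The design choice worth noting is the fall-through: anything not explicitly listed lands in the minimal tier, mirroring how the proposal leaves most everyday uses unregulated.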

The proposal also deals with facial recognition and prohibits its use in public areas with some exceptions, such as the search for a missing child, or to prevent a specific and imminent terrorist threat subject to the authorization of a judicial body or other independent body and to the appropriate limits in time, geographic scope and the databases searched. The proposal is important to prevent what is happening in China, for example, where facial recognition is being used to identify activists in social protests and persecute them to silence dissenting voices among other questionable uses (Monzur, 2019).

In addition, we could also mention the proposals of other bodies such as the Inter-American Development Bank (IDB), the World Economic Forum, the United Nations Centre for AI and Robotics, the AI for Good initiative, the Association for the Advancement of AI and the Forum on the Socially Responsible Development of AI, as well as national regulations enacted by various States.


From the different documents briefly analysed, it is possible to extract some essential recommendations in relation to the regulation of AI and the ethical principles that these technologies must respect. Among them we can highlight:

First, it is necessary to point out that many of the documents and principles analysed above are neither international nor binding on States, so they are not applied uniformly. Just as the fear of a new war after the Second World War led States to sign international human rights treaties, a global framework is equally necessary today to regulate technologies that affect, and will continue to affect, the entire world.

That regulatory framework must be flexible enough to allow innovation and the free development of new technologies, but at the same time, as the aforementioned Barcelona Declaration already highlighted, it must maintain a precautionary principle: caution in developing new technologies whose possible risks are not sufficiently understood before they go on the market (European Environment Agency, 2002).

It is necessary that the teams in charge of building AI systems be diverse in terms of gender, race, ethnicity, religion, ability, age and academic background, and that there be a broad debate among different organizations, so that countries, international organizations, companies, researchers, academics, experts, civil associations and the general public can all be part of the conversation, ensuring a broad approach to creating, regulating and using these technologies in all phases of their development.

It is recommended that the decisions made by AI systems be auditable, so that algorithms can be checked for bias and the system made transparent and explainable. It will also be necessary to rethink the civil liability system, since the greater the autonomy of the machine, the less responsible the human being may be. Moreover, the development and implementation of AI technologies must be accompanied by digital literacy policies and training for workers, to keep them active and relevant in the labour market and prevent a rise in unemployment. It is likewise necessary to carry out risk management assessments, along the lines proposed by the European Union, so that the greater the potential risk, the greater the controls.

It is also important to prohibit the use of AI in systems for military purposes, especially those with autonomous functions that allow the machine to make its own decisions.

Finally, I consider it necessary to always maintain human control over the actions of AI-based systems, especially those capable of learning on their own and those whose software can be updated remotely. In short, a safe, human-centred AI that ensures respect for human rights and safeguards privacy and the protection of personal data: an AI for the benefit of humanity.


As we have seen, the potential that AI has to change our lives is unprecedented in human history, where practically all industries and services could be affected by this new technology. This is why the development of AI should be encouraged to permit it to continue benefiting humanity, but doing so in a safe way.

As we mentioned, technology changes society, and new technologies bring ethical and legal challenges never seen before. For this reason, a legal system up to date with the times we are living in is necessary to respond to the needs, problems and new ethical dilemmas that AI entails. We must ensure that we take the appropriate steps to make those changes positive, and have safe, consensual mechanisms in place to ensure that AI systems serve humanity and respect human rights.

We have already seen some of the consequences of not regulating these new technologies appropriately, and with AI becoming an ever greater part of our daily lives, a global agreement on the ethical principles of AI that includes governments, international organizations and private companies is more necessary than ever.

Fortunately, several documents have already begun to lay the foundations and ethical principles on which AI systems should be based, but this task must now become global, so that no country is left behind or at risk. We should remain vigilant and closely monitor the possible risks that new AI systems may bring. In this way, it may be possible to build an approach that encourages innovation, but cautiously, in order to guarantee a future of AI that is safe, human-centred, safeguards the protection of personal data, respects human rights and keeps humans in control of this technology rather than controlled by it.


Alba, D; Koeze, E. (June 7, 2021) What Happened When Trump Was Banned on Social Media. The New York Times. Retrieved from: on 19/04/22.

AI Now Institute. (2018). AI Now Report. Retrieved from on 23/04/22.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (May 23, 2016) Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica. Retrieved from: on 19/04/22.

Asimov, I. (1950). "Runaround". I, Robot (The Isaac Asimov Collection ed.). New York City: Doubleday. p. 40. ISBN 0-385-42304-7.

Barcelona Declaration for the Proper Development and Usage of Artificial Intelligence in Europe (2017) organized by Biocat, supported by the Obra Social la Caixa, Barcelona. Available at:

BBC News Mundo (September 3, 2019) "Billionaires Jack Ma and Elon Musk disagree on which is the greatest threat to humanity" La Nación. Retrieved from:

Boissier, O., Bonnet, G., Cointe, C., De Swarte, T., & Vallée, T. (2018). Ethics and autonomous agents. Building ethical collectives. Retrieved from on 19/04/22.

Bonnefon, J.F (2021) The Car That Knew Too Much. Can a Machine Be Moral? MIT Press. Retrieved from: on 19/04/22.

Bossmann, J. (2016). Top 9 ethical issues in artificial intelligence. In World Economic Forum. Retrieved from on 19/04/22.

Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies. Zaragoza: Tell.

Bryson, J. & Winfield, A. (2017). Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems. Computer, 50 (1), 116-119. Doi: on 17/04/22.

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Amodei, D. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. (First ed.). Future of Humanity Institute, University of Oxford, Arizona State University. Retrieved from on 19/04/22.

Dickens, C. (2012). A tale of two cities. Penguin Classics.

Chomsky, N. (1957). Syntactic Structures. Madrid: 20th century.

Economic Commission for Latin America and the Caribbean. ECLAC (2018) Data, algorithms and policies. Redefining the digital world. United Nations: Santiago.

European Commission (2016) “Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: “Digitalisation of European industry Reaping all the benefits of a digital single market” April 19, 2016. COM (2016) 180 final.

European Commission (2018a) “Communication from the Commission to the European Parliament, the European Economic and Social Committee and the Committee of the Regions: Artificial Intelligence for Europe”. COM (2018) 237 final. April 25, 2018. Brussels.

European Commission (2018b) “Communication from the Commission to the European Parliament, the European Economic and Social Committee and the Committee of the Regions: Coordinated Plan on Artificial Intelligence”. COM (2018) 795 final. December 7, 2018. Brussels.

European Commission (2019) "Communication from the Commission to the European Parliament, the European Economic and Social Committee and the Committee of the Regions: Building trust in human-centric artificial intelligence". COM (2019) 168 final. April 8, 2019. Brussels.

European Commission (8 April 2019) Report: Ethics guidelines for trustworthy AI. Retrieved from: on 19/04/22.

European Economic and Social Committee (2017) Opinion of the European Economic and Social Committee on "Artificial intelligence: the consequences of artificial intelligence for the (digital) single market, production, consumption, employment and society" Official Journal of the European Union. C 288. 60th year. May 31, 2017, pp. C288/1-C288/9.

European Economic and Social Committee (2018) Opinion of the European Economic and Social Committee on "Trust, privacy and security of consumers and businesses in the Internet of things" Official Journal of the European Union. C 440. 61st year. December 6, 2018, p. C440/8-C440/13.

European Parliament (2017) "Resolution of the European Parliament, of February 16, 2017, with recommendations addressed to the Commission on civil law rules on robotics (2015/2103(INL))" Retrieved from: on 18/04/22.

European Parliament (2019) "Resolution of the European Parliament, of February 12, 2019, on a European global industrial policy in the field of artificial intelligence and robotics (2018/2088(INI))" Retrieved from: http://www.europarl on 18/04/22.

European Parliament (2021) Artificial Intelligence Act. Retrieved from: on 22/04/22.

Expert Group on Liability and New Technologies (2019) Liability for Artificial Intelligence and other emerging technologies. European Union.

First Forum on Artificial Intelligence and Internet of Things in Smart and Sustainable Cities in Latin America. “Declaration of Buenos Aires. Artificial Intelligence and Internet of Things in Smart and Sustainable Cities in Latin America”. (2018). ITU Forum Retrieved from: on 18/04/22.

García-Prieto Cuesta, J. (2018) “What is a robot?”, in Barrio Andrés, M. (dir.), Law of Robots. Madrid: Wolters Kluwer.

Harari, Y. (2019) “Who Will Win the Race for AI?” foreign policy magazine. Retrieved from: on 18/04/22.

Hawking, S. (2018). Stephen Hawking at Web Summit 2017. Retrieved from on 19/04/22. Available as of September 4, 2018.

Héder, M. (June 2021). "AI and the resurrection of Technological Determinism" (PDF). Információs Társadalom (Information Society). 21 (2): 119–130. doi:10.22503/inftars.XXI.2021.2.8.

Hintze, A. (November 13, 2016) “Understanding the four types of AI, from reactive robots to self-aware beings” The Conversation. Retrieved from: on 19/04/22.

Institute of Electrical and Electronics Engineers. (2019). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. (First ed.). IEEE Standards Association. Retrieved from on 20/04/22.

International Federation of Robotics (2018) The impact of Robots on Productivity, Employment and Jobs. Frankfurt: International Federation. of Robotics.

Katz, Y. (1 November 2012) “Noam Chomsky on Where Artificial Intelligence Went Wrong” The Atlantic. Retrieved from: on 17/04/22.

Laichter, L. (2019). Ethics of Autonomous Systems. Annotated bibliography of recommended materials. Retrieved from on 20/04/22.

Larson, J., et al (May 23, 2016). “How we analyzed the COMPAS recidivism algorithm” ProPublica. Retrieved from: on 17/04/22.

Lavanchu, M. (2018) Amazon's sexist hiring algorithm could still be better than a human. IMD. Retrieved from: on 20/04/22.

Lopez de Mantaras, R. (2019). Artificial intelligence: fears, realities and ethical dilemmas. Interactive Magazine. In Interactive. Retrieved from on 18/04/22.

Macías, C., Fernández, A., Méndez, C., Poch, J., & Sevillano, B. (2015). Human intelligence. A theoretical approach from philosophical and psychological dimensions. Journal of Scientific Information, 91 (3), 577-592. Retrieved from: on 19/04/22.

Mattingly-Jordan, S., Day, R., Donaldson, B., Gray, P., & Ingram, I. (2019). Ethically Aligned Design. First Edition Glossary. A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. (First ed.). IEEE Standards Association. Retrieved from on 19/04/22.

Mazo, E. (August 18, 2018) “The 100 companies with the highest stock market value” Expansión. Retrieved from: on 17/04/22.

McCarthy, J. (11 November 2007). “What Is Artificial Intelligence” Stanford University. Basic Questions section. Retrieved from: on 20/04/22.

McCarthy et al. (August 31, 1955) “A Proposal for the Dartmouth Summer Research project on Artificial Intelligence.” Stanford University Retrieved from: on 28/04/22.

McKinsey Global Institute (2017) A Future That Works: Automation, Employment, and Productivity. McKinsey Global Institute.

Microsoft (2018) The Future Computed: Artificial Intelligence and its Role in Society. Redmond, Washington: Microsoft Corporation.

Monasterio, A. (2019). Ethics for machines: similarities and differences between artificial morality and human morality. International Journal of Applied Ethics, 30, 129-147. Retrieved 17/04/22.

Mozur, P. (July 26, 2019). In Hong Kong Protests, Faces Become Weapons. The New York Times. Retrieved 22/04/22.

Mourelle, D. (2019). Amazon vs. Microsoft: Tech companies enter the defense industry. In The World Order. Retrieved 20/04/22.

Nieva, R. (July 1, 2015). Google apologizes for algorithm mistakenly calling black people 'gorillas'. CNET. Retrieved 19/04/22.

Palmerini, E. (2017). "Robotics and Law: Suggestions, Confluences, Evolutions in the Framework of a European Research." Revista de Derecho Privado, 32, 53-97.

Paniagua, E. (2019). Future Trend Forum: Artificial Intelligence. Madrid: Bankinter Innovation Foundation.

Perasso, V. (October 12, 2016). "What is the fourth industrial revolution (and why should we care)." BBC Mundo. Retrieved 17/04/22.

Pichai, S. (June 7, 2018). AI at Google: our principles. Google Blog. Retrieved 19/04/22.

PwC (2018). Fourth Industrial Revolution for Earth. PwC. Retrieved 19/04/22.

Roldán Tudela, J. M., et al. (2018). Artificial Intelligence applied to Defense. Spain: Spanish Institute for Strategic Studies.

Russell, S., et al. (July 28, 2015). "Autonomous Weapons: An Open Letter from AI & Robotics Researchers." The Future of Life Institute. Retrieved 18/04/22.

Russell, S., & Norvig, P. (2004). Artificial Intelligence: A Modern Approach (Second ed.). Madrid: Pearson Education S.A.

Salesforce (July 2, 2018). "Machine Learning and Deep Learning: Learn the Differences." Salesforce. Retrieved 19/04/22.

Scherer, M. (2016). Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies. Harvard Journal of Law & Technology, 29(2), 354-400. Retrieved 17/04/22.

Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457.

Shane, S., & Wakabayashi, D. (April 4, 2018). 'The Business of War': Google Employees Protest Work for the Pentagon. The New York Times. Retrieved 19/04/22.

Singer, N. (January 24, 2019). Amazon Is Pushing Facial Technology That a Study Says Could Be Biased. The New York Times. Retrieved 19/04/22.

Smith, M. R., & Marx, L. (Eds.) (1994). Does Technology Drive History? The Dilemma of Technological Determinism. The MIT Press. ISBN 978-0262691673.

Sterling, B. (August 13, 2018). The Asilomar AI Principles. Wired. Retrieved 18/04/22.

Sychev, V. (2018). "The Threat of Killer Robots." The UNESCO Courier: Artificial Intelligence, Promises and Threats, (3), 25-28.

Torrero, J. (2019). Ethics and artificial intelligence: Silicon Adam and Eve biting the forbidden apple. In Nobbot. Retrieved 19/04/22.

Torres, M. (2019). Rights and challenges of Artificial Intelligence. Buenos Aires: CyTA.

Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460. Oxford University Press on behalf of the Mind Association.

Varangaonkar, A. (2018). The ethical dilemmas developers working on Artificial Intelligence products must consider. In Packt.

Vecchione, M. (2018). Technology for Good: A Novel Approach. ITU News Magazine: Artificial Intelligence for Good in the World, pp. 11-15.

Villalba, J. (2020). Algor-ethics: ethics in artificial intelligence. Faculty of Legal and Social Sciences, National University of La Plata (UNLP), Year 17, No. 50. ISSN 0075-7411 (print), 2591-6386 (electronic). Retrieved 19/04/22.

Vinge, V. (1993). The Coming Technological Singularity: How to Survive in the Post-Human Era. VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, March 30-31, 1993. Department of Mathematical Sciences, San Diego State University. Retrieved 19/04/22.

VOI Editorial Team (May 30, 2021). United States Tests The Skyborg, An Artificial Intelligence Fighter Jet Drone. VOI.

Yaffe-Bellany, D. (October 24, 2019). "Quantum Computing Explained in Minutes." The New York Times, Science & Technology Section.

Knight, W. (November 19, 2019). The Apple Card Didn't 'See' Gender—and That's the Problem. Wired. Retrieved 19/04/22.

[1] Available at:

[2] An Open Letter on AI was signed in 2015 by different experts. The letter is available at:

[3] Available at:

[4] Asimov's three fundamental laws of robotics state that: 1. A robot may not harm a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given by humans, except where such orders conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

[5] "Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems"