Research Article | Peer-Reviewed

About Some Socio-economic Problems and Risks of Artificial Intelligence

Received: 1 March 2024     Accepted: 18 March 2024     Published: 11 September 2024
Abstract

This article analyses some socio-economic risks related to the application of artificial intelligence (AI) in several fields of activity. Existing gaps in the legal regulation of AI-related activities are also investigated. The article clarifies issues related to determining the division of liability for certain legal consequences resulting from AI activity, and sets out the norms and principles to be adhered to in order to protect personal data during the application of AI. As one of the public's concerns regarding artificial intelligence, the article notes the importance of ensuring the transparency and accountability of this technology. It also interprets problems arising from the relationship between artificial intelligence and intellectual property, including the recognition of property rights in intellectual products created via AI. The macro- and micro-level impact of artificial intelligence on the economy is analyzed, with attention paid to issues such as productivity, competition, changes in the nature of the labor market, the rise in unemployment, and the deepening of social and digital inequality resulting from the application of this technology. Moreover, the advantages and risks of human-robot collaboration are evaluated. The article highlights among the biggest threats of artificial intelligence the creation of fake content and misinformation, and the significant problems these cause, and interprets methods of preventing those threats on both the technological and legal planes. Finally, the risks of applying artificial intelligence in critical fields such as the military and healthcare are characterized.

Published in International Journal of Science, Technology and Society (Volume 12, Issue 5)
DOI 10.11648/j.ijsts.20241205.11
Page(s) 140-150
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

Artificial Intelligence, Robot, Digital Inequality, Personal Data, Fake Content

1. Introduction
As a result of the emergence of smart systems, devices and robots, and their transformation into an integral part of human life, it is forecasted that civilization will enter a new stage of development. The further intellectualization of the Internet and the emergence of collective artificial intelligence, as the next stage in the development of artificial intelligence technologies, form the basis of a new socio-technological society for the first time in human history.
Artificial intelligence is rapidly entering human life and supports the development of society. In this context, the issues of collecting and processing large amounts of information from smart devices connected to various objects, such as sensors and video cameras, become relevant. Artificial intelligence is already becoming the most in-demand technology in the world. Its application in the economy, management, manufacturing, services, healthcare, education, security, transportation and other fields is becoming ever more widespread, and rapid development of artificial intelligence technologies is predicted all over the world. Countries leading in the development, export and application of artificial intelligence technologies are at the same time world leaders in the political, military and economic spheres. Many countries adopt development strategies related to artificial intelligence, pay great attention to the development of this field, and strive to increase their global competitiveness.
According to the results of a study by PwC, as a result of the accelerating development and adoption of artificial intelligence, global GDP is forecasted to increase by 14% by 2030, with AI contributing 15.7 trillion dollars to the global economy. It is forecasted that the next wave of the digital revolution will be driven by data from the Internet of Things (IoT), which is several times larger than the data generated by the current "Internet of People". Widespread adoption of artificial intelligence in the economy is expected to improve standardization and automation, in addition to the personalization of products and services.
However, despite all the advantages of artificial intelligence for human activity, there are a number of important problems and risks related to its application and development. Some of those problems and risks create significant obstacles to the development of artificial intelligence, while others cause serious distress and anxiety in society. This article groups and classifies these problems and risks, explains the causes of their emergence, and describes the dangers and concerns they generate. Considering existing practices and approaches, proposals are introduced to solve the identified problems and prevent the associated risks.
2. Legal Problems Related to Application of Artificial Intelligence
Modern challenges require the legal systems of all states to respond to the technological challenges of the era, including the adequate development of artificial intelligence technologies. First of all, there is a need to develop regulatory mechanisms for the sustainable development of the artificial intelligence industry. Countries interested in gaining maximum benefit from artificial intelligence and related technologies are making progress in the legal regulation of this field. This stems from the use of drones, robots and various other devices and software systems in daily life and professional activity. It must be considered that modern technologies can damage society, people's health, property, etc. Hence, attention to this matter must be amplified and an adequate normative-legal basis must be established.
It must be noted that, due to their global character on virtual platforms, artificial intelligence technologies can create significant problems at both the national and international levels. Accordingly, countries must jointly combat possible future threats and participate closely in their prevention.
Overall, an analysis of tendencies in AI development and its fields of application makes it possible to identify two types of risks related to the relevant technologies that can lead to certain legal consequences – direct and indirect.
Direct legal risks of using artificial intelligence are risks related to the direct impact of threats emerging as a result of AI use. These risks include deliberately posing a threat to people's lives and health, constitutional rights and freedoms, honor and dignity, violating public security, or acting against the state using an artificial intelligence system.
Indirect legal risks of using artificial intelligence are risks related to unexpected threats arising while using artificial intelligence. These risks include the following:
1. accidental errors in AI system software (errors made by the AI system developer);
2. errors made during the AI system's operation (errors made by the AI system itself).
2.1. Liability Issues for AI Activity
Determining legal liability for AI activity is among the main problems in this field. Despite a significant evolution of ideas regarding the ethics of AI use, the main obligations have remained unchanged since the time of Isaac Asimov: preventing damage to people, their property and the robots themselves.
Currently, social and legal liability issues related to AI applied in complex mathematical calculations, large-scale data analysis, and various manufacturing and service processes have hardly been considered at the national or international level. Moreover, not all processes performed with the application of AI are completed at the desired level.
Consider surgeries performed by robot-surgeons. The number of surgeries performed by medical robots is rising rapidly worldwide. At the same time, there are many documented cases of malfunctions or errors by robot-surgeons. However, legal systems have still not determined who is liable for those faults.
AI-related liability is an important legal matter. As AI systems develop, it may become difficult to determine who is liable for them. It is necessary to adopt separate laws and regulations that determine and establish the liability of people and organizations for the actions of AI systems.
It must be noted that Resolution 2015/2103 (INL) of the European Parliament of February 16, 2017, regarding the civil-law regulation of robotics, mentions the impossibility of holding artificial intelligence itself responsible for actions that harm third parties. According to that resolution, the manufacturer, operator, owner or user of artificial intelligence may be liable for the damage it causes.
The problem of dividing liability for AI activity among its potential subjects – manufacturer, owner and direct user – is one of the most complex legal problems of artificial intelligence and related technologies. Its solution also requires a precise balancing of the interests of citizens, business entities and the state.
As leading global companies increasingly integrate AI into their products and systems, the potential for the technology to cause harm to people and property grows. The capability of AI to operate and make decisions independently creates new legal problems. As no current laws cover damage caused by AI, plaintiffs and courts have begun to test the application of traditional legal theories and principles to injuries related to artificial intelligence products such as driverless vehicles and worker robots.
Relevant legal principles include the following:
1. liability principle based on product quality;
2. information protection and confidentiality principles;
3. fairness principle;
4. objectives’ determination principle.
2.2. Personal Data Protection Problems During AI Application
When deploying AI, companies must comply with international legal norms and the applicable national laws governing the use of personal data. Data protection laws exist in almost all countries and generally cover the collection, use, processing, disclosure, storage and protection of personal data. These laws can also limit the transfer of personal data between countries.
2.2.1. Issues Related to Fairness Principle
Most data protection laws require companies to process personal data in a fair manner. The fairness principle aims to protect the rights of individuals with regard to how companies decide to use their personal data. For this, organizations are required to:
1. convey to individuals, in a clear, simple and transparent manner, how the organization is going to collect, use and process their personal data;
2. take measures to prevent discrimination against individuals.
The complexity of the methods used in AI can create problems for organizations' compliance with the fairness principle. For example, machine learning algorithms can reflect the biases of their developers. Likewise, incomplete data, data anomalies and errors in algorithms can lead to faulty results.
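A toy sketch, with entirely hypothetical data, of how such bias arises: a rule "learned" from historically skewed decisions simply reproduces the bias in that history, here rejecting every applicant from one neighborhood regardless of income.

```python
from collections import Counter

# Hypothetical training set: past loan decisions, skewed against one
# neighborhood regardless of the applicant's income.
train = [
    {"neighborhood": "north", "income": 40, "approved": True},
    {"neighborhood": "north", "income": 30, "approved": True},
    {"neighborhood": "north", "income": 25, "approved": True},
    {"neighborhood": "south", "income": 50, "approved": False},
    {"neighborhood": "south", "income": 45, "approved": False},
    {"neighborhood": "south", "income": 60, "approved": False},
]

def learn_rule(rows, feature):
    """Learn the majority outcome per feature value -- a caricature of what
    a model does when a feature acts as a proxy for a protected group."""
    votes = {}
    for r in rows:
        votes.setdefault(r[feature], []).append(r["approved"])
    return {v: Counter(out).most_common(1)[0][0] for v, out in votes.items()}

rule = learn_rule(train, "neighborhood")
# The learned rule rejects every "south" applicant, even high-income ones:
print(rule)  # {'north': True, 'south': False}
```

The point of the sketch is that the "error" is not in the algorithm itself but in the historical data it was given, which is why fairness obligations focus on the data and process, not only the code.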
2.2.2. Issues Related to Specification of Objectives
Data protection laws usually require the following from organizations:
1. collecting personal data only for specific, disclosed and legal purposes;
2. except in limited cases, not processing data in a manner that is inconsistent with those purposes.
2.2.3. Problems Related to the Data Minimization Principle
The data minimization principle requires the following from organizations:
1. not using more data than necessary to achieve the stated processing objectives;
2. minimizing data storage time.
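Both requirements can be sketched in a few lines of code. The field names and the 90-day retention window below are assumptions for illustration, not drawn from any particular law.

```python
from datetime import datetime, timedelta

# Hypothetical schema: only the fields actually needed for the declared
# processing purpose (order fulfilment) are retained.
NEEDED_FIELDS = {"order_id", "delivery_address"}
RETENTION = timedelta(days=90)  # assumed retention policy

def minimize(record: dict) -> dict:
    """Keep only the fields required for the declared purpose."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

def is_expired(collected_at: datetime, now: datetime) -> bool:
    """True once the record has exceeded the retention window."""
    return now - collected_at > RETENTION

raw = {"order_id": 7, "delivery_address": "X", "birth_date": "1990-01-01"}
print(minimize(raw))  # birth_date is dropped: not needed for the purpose
```

In practice both checks would run automatically on ingestion and on a periodic deletion schedule.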
2.3. Transparency and Accountability Issues of AI
One of the most important legal issues related to artificial intelligence is transparency and accountability. AI systems can be non-transparent and difficult to understand, making it difficult for people to comprehend how decisions are made. Specific laws and regulations are needed to ensure that AI systems are transparent and explainable, and that people can challenge decisions made by AI.
For example, after the mass adoption of ChatGPT, many started asking how this model developed by OpenAI was trained. To date, OpenAI has not published or made this process transparent. The public does not know how ChatGPT was trained, what data was used, where the data was taken from, or the details of its architecture. The absence of transparency and accountability regarding this technology is justifiably concerning.
There are several approaches to ensuring AI transparency:
1. Open source: sharing AI system code and algorithms as open source, allowing both researchers and users to inspect and understand them.
2. Explainable AI: developing algorithms that clearly explain their approaches, decisions and training processes.
3. Standards and regulations: industry standards and regulations can be applied to ensure the transparency of AI. This helps people understand how AI systems work and how decisions are made.
4. Audit and evaluation: independent audit and evaluation procedures help to ensure the fair and ethical operation of AI systems.
5. Protection of personal data: to protect users' confidentiality rights, transparency must be maintained and the methods of collecting, processing and using personal data must be clearly communicated.
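One minimal sketch of the explainability idea above: for a simple linear scoring model, the per-feature contributions can be reported alongside the decision, so that a user can see why it was made and challenge it. The model, weights and threshold below are entirely hypothetical.

```python
# Assumed linear model: each feature's weight is known and fixed.
WEIGHTS = {"income": 2, "debt": -3, "years_employed": 1}
THRESHOLD = 4  # assumed decision threshold

def decide_with_explanation(applicant: dict):
    """Return the decision together with each feature's contribution,
    so the outcome is inspectable rather than a black box."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, why = decide_with_explanation(
    {"income": 4, "debt": 1, "years_employed": 2}
)
print(approved, why)  # True {'income': 8, 'debt': -3, 'years_employed': 2}
```

Real explainability methods (for models whose internals are not this simple) approximate the same output: a decision plus a human-readable account of which inputs drove it.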
2.4. AI and Intellectual Property Issues
Intellectual property systems were developed to stimulate human innovation and creativity. However, new questions have been arising regarding intellectual property since the wide application of AI technologies in intellectual creativity processes:
1. Are intellectual property incentives needed for AI-based innovation and creativity?
2. How can a balance be found between human innovation and creativity on the one hand, and AI innovation and creativity on the other?
3. Does emergence of AI require changes in intellectual property protection mechanisms?
4. Is it necessary to change the existing intellectual property system in order to ensure balanced protection of AI created works and inventions?
Recently, scientific, literary and artistic works have been created and used via various AI programs. Simultaneously, students and researchers use scientific texts provided by AI programs in their own works. All these processes raise the issue of intellectual property rights. A question emerges: who is the author of such works – the AI developer, the AI program itself, the AI program's proprietor, or its user?
One of the most controversial issues is whether intellectual or creative activity performed using artificial intelligence technologies constitutes genuine creativity, or whether such creativity remains a uniquely human characteristic.
Besides, the use of the recently popularized ChatGPT by students and researchers in the creation of scientific articles creates significant problems. The emergence of such opportunities, while technological and legal problems persist in the field of combating plagiarism, aggravates the problems in this area even further.
If we view the AI as the object of intellectual property right, then this necessitates protection of AI through intellectual property laws.
As shown above, there is no universal legal system to protect intellectual property developed or supported by AI. Legislation regarding property rights in AI-generated products is still evolving, and many countries have open issues in this area. However, there are several approaches to protecting property rights in AI-generated products:
1. Human authorship: accepting that AI is only a tool and that the human is the true author of a creative work.
2. AI authorship: recognizing AI as an independent author. However, this may cause complexities within existing legislation, as in many countries authorship rights are granted only to humans.
3. New legislation: applying private property rights regulations to AI-generated works.
4. Licensing and patenting: granting product licensing or patenting rights to companies that provide AI creativity capabilities. For example, the US Patent and Trademark Office (USPTO) recognizes AI as a category in its patent classification system.
5. Public domain: automatically classifying AI-generated products as public domain, i.e. open for free use by everyone.
Each approach has its own justification, advantages and challenges. The future evolution of legislation on AI creativity depends on the further development of the technology and on how public opinion is formed.
3. Economic Problems of AI Application
AI plays an important role in the economy and already impacts it in different ways. There is now fierce competition worldwide to reap its benefits. Artificial intelligence is seen as a driver of productivity and economic growth. This technology can increase the efficiency of performed work and significantly improve decision-making by analyzing large amounts of data. It can also spur the creation of new products and services, markets and industries, thereby increasing consumer demand and creating new revenue streams.
But AI can also harm the economy and the public. Some experts warn that one of the biggest harms of artificial intelligence could be the creation of giant monopolistic companies – centers of wealth and knowledge – that could have a detrimental effect on the economy. AI may also widen the gap between developed and developing countries and increase the need for workers with certain skills while eliminating the need for others. The latter trend may have negative consequences for the labor market. Experts also warn of AI's potential to increase inequality, lower wages and narrow the tax base. While these concerns remain relevant, there is no consensus on whether, and to what extent, the associated risks will materialize.
PwC identifies two main channels through which AI will impact the global economy. The first is a near-term productivity increase based on the automation of routine tasks, which can affect capital-intensive sectors such as manufacturing and transportation; this covers the wide use of technologies such as robots and autonomous vehicles. The second is a predicted productivity increase from tools that support and augment the existing workforce using AI. Automation will partially eliminate the overall need for labor, resulting in increased productivity.
McKinsey reports that AI significantly impacts, and has remarkable commercial potential in, sectors such as marketing and sales, supply chain management, logistics and manufacturing. The results of a survey conducted by Boston Consulting Group show that the transportation, logistics, automobile and technology sectors are already at the forefront of AI application. According to PwC's calculations, every sector of the economy will gain at least 10% by 2030 thanks to AI.
As the level of AI development varies globally, the difference between advanced and less developed countries is widening. In particular, leading AI companies located in developed countries will become more powerful in comparison with their counterparts in developing countries.
3.1. AI Impact on Labor Market
Throughout the history of humanity, people have always looked for helpers and tried to lighten their work. Humans have always created new devices, mechanical systems, etc. to achieve their various goals and dreams. At all times, they tried to live better, to gain new opportunities, to be in a ruling and dominant position in nature and society. All this has led to the formation of the labor market in accordance with the realities of each era and level of development, creation of new professions, and at the same time, elimination of some professions. In the 21st century, humanity has the opportunity to create "intelligent technological servants" for itself. Today, robots and drones based on artificial intelligence can be considered as "smart technological servants" of people.
AI technologies take over various functions inherent to human intelligence, labor and activity. Hence, a new labor market is forming under the impact of this objective development process, one that requires new capabilities, skills, mentality and behavior from people. Today, in our country, the widespread use of online services in manufacturing, security, transport, trade, public catering and other fields under the influence of artificial intelligence technologies is forming a new segment of the labor market on the digital platform. Workplaces are already becoming virtual, and opportunities to work online in various fields are increasing rapidly.
The characteristics of the modern labor market differ completely from those of previous periods. Nowadays, any citizen with certain skills and experience can perform one or several jobs using a mobile phone. Mobile devices are not just a communication tool; they have also become a labor tool and a workplace. People earn income by rendering education, advertising and marketing, consulting and commercial services on social networks. Due to the rapid development of information and communication technologies and artificial intelligence, people with physical disabilities are also engaged in the labor market and inclusive labor, realize their potential by working over the Internet, and gain opportunities to become active members of the workforce. At the same time, the scope of the virtual labor market is not limited to a single country. People join the global virtual labor market, secure jobs in foreign countries and meet their financial needs.
The development of AI significantly reduces the cost of traditional automation and creates new opportunities for intelligent automation. Traditional automation technologies can sharply increase labor productivity, but their specific and homogeneous parameters only allow them to perform simple, repetitive tasks. Unlike previous periods, the era of intelligent automation has created a new virtual workforce that can be considered a new factor of production. On the one hand, at the current stage of manufacturing, this tendency reduces dependence on manual labor and encourages labor replacement. On the other hand, thanks to its learning and self-updating capabilities, AI can effectively meet the complex labor needs of many real-life automated jobs.
Currently, most people's concerns relate to the impact of automation on their jobs and those of the people around them. This is an understandable concern, because for most people a job is more than just a source of income; it is also a source of prestige, a sense of value, engagement with life and status in society. As technologies such as AI, robotics and automation are widely applied in the economy, the creation of new workplaces, as well as the elimination of existing ones (or their replacement with new ones), will become inevitable. Based on the forecast of the Bruegel think tank, 54% of jobs face the risk or possibility of computerization. Researchers believe that AI will change the character and scope of jobs in different economic sectors, requiring the retraining of a significant share of the workforce.
The share of jobs characterized by routine activities or requiring a low level of computer skills is expected to fall to 30% of total employment by 2030 (vs. 40% now). Such changes may affect wage levels. Specialists predict that approximately 13% of the wage fund will shift to the category of complex, non-repetitive jobs. Jobs with repetitive tasks (requiring no digital skills, or only low-level ones) will soon account for only 20% of global wages, compared with the current 33%.
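The figures above are internally consistent: the roughly 13% of the wage fund predicted to shift is simply the gap between the current and projected wage shares of repetitive-task jobs.

```python
# Reproducing the wage-share arithmetic from the figures cited above.
routine_share_now = 0.33    # repetitive-task jobs' current share of global wages
routine_share_2030 = 0.20   # projected share in the near future
shifted = routine_share_now - routine_share_2030
# The difference matches the ~13% of the wage fund predicted to move
# to complex, non-repetitive work.
print(f"{shifted:.0%}")  # 13%
```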
The application of AI can also impact salaries, income distribution and economic inequality. Increasing demand for highly skilled workers who can use AI can significantly increase their salaries, while middle- and low-skilled workers may face reduced salaries or unemployment. Such changes in labor demand can affect total wages and worsen overall income distribution. In theory, the more AI solutions replace day-to-day labor, the greater the increase in productivity and overall income, and the steeper the rise in inequality.
However, some experts believe that AI will have the hardest time replacing the "sensory-motor skills" of workers who do non-standard and unusual jobs, such as security guards, cleaners, gardeners, cooks, etc.
Thus, while AI has significant potential to increase economic returns and productivity, it also poses equally serious risks, such as labor market polarization, rising inequality, and unemployment.
Some governments adopt special programs to mitigate these problems. Unemployment support is among them. The implementation of relevant training programs is another policy: low-skilled workers and those who lose their jobs due to AI can be trained to acquire new skills and improve existing ones. Such programs could enable workers to stay employed in AI-dominated fields of activity.
Tax measures are proposed as another policy tool to protect jobs from robots. Such a tax, known as a "robot tax", could help workers keep their jobs by discouraging companies from deploying more robots. So far, taxing robots appears technically and economically complicated and is not widely supported. Moreover, this policy is a step that could hinder technological development, economic productivity and improvements in the quality of production and services.
In general, the issue of taxing robots is currently in the focus of attention of many countries and experts. It should be taken into account that at present, the majority of tax revenues worldwide are generated by personal income tax, and personal income mainly comes from labor activity. Meanwhile, many companies have started to prefer robot workers, in part to avoid such taxes. Hence, some experts believe that the resulting reduction in personal income tax revenue should be compensated by a robot tax.
If the impact of AI on the labor market increases unemployment, this problem can be addressed using the following methods:
1. Restructuring education and training: people must gain skills that cannot be replaced by AI or automation. Education programs focusing on "soft skills" such as creative thinking, solving complex problems and human relations must be offered.
2. Life-long learning: Employees must be encouraged to develop habits of continuous learning and self-development, and resources must be provided for career changes and educational opportunities.
3. Strengthening social security: Social security systems should be expanded and strengthened against the possibility of increased unemployment.
4. Economic diversification: Encouraging economic growth in various areas can help increase job opportunities at the local and national levels.
5. Taxing automation: Taxes from highly automated companies can be diverted to unemployment funds, which can be used as a source of support for the unemployed.
6. Creating new jobs: Public and private sector can develop programs aimed at creating new jobs in the new fields created by AI.
3.2. Risks of Using Robots in Employee Selection
Modern workplaces are becoming increasingly dependent on AI to perform certain human resources and employee management functions. Some advanced companies also use AI when selecting, interviewing and hiring candidates in order to prevent discrimination and bias during recruitment. However, the use of artificial intelligence brings its own risks and does not fully protect an employer against claims of discrimination and bias. The Internet, social media, and public databases used by some AI tools typically reveal information about job applicants and employees that an employer cannot legally ask about (religious, ethnic or political views, etc.). This leads to the violation of certain provisions of human rights law.
4. Risk of Digital Inequality Created by AI
One of the negative trends related to the rapid development and wide application of artificial intelligence is the growing digital divide between countries. Each country has its own opportunities and characteristics, with different development levels and potentials, so each country's capabilities and strategies for implementing artificial intelligence will differ. Experts believe that as a result of the application of artificial intelligence, developed countries can benefit by 20-25% (compared to today), while developing countries can benefit by only 5-15%.
It is inevitable that the benefits of the widespread application of artificial intelligence will also be distributed unevenly at the micro level, among enterprises and organizations. Companies that adopt AI technologies could double their cash flow by 2030, likely leading to more hiring. Companies that are unwilling or unable to adopt AI technologies at the same pace could go bankrupt. In fact, companies that do not use AI may experience a severe drop in cash flow as they lose market share, facing pressure to lay off workers.
AI also threatens to create and deepen social inequality among individuals. For example, using AI capabilities, people increase their knowledge and education level, try to become competitive in the market, build successful careers, manage household matters more easily and create personal business opportunities. However, not everyone can benefit from these opportunities: the technologies are not available to everyone. Using them requires, first and foremost, a good-quality Internet connection and the necessary knowledge, yet people's financial situation, level of education and surrounding infrastructure do not always allow this. As a result, society faces the next wave of digital inequality and a new form of social inequality in the age of artificial intelligence. To eliminate or minimize this inequality, it is necessary to implement government social support projects, strengthen relevant education and training programs, increase the inclusiveness of artificial intelligence technologies and update public policy in this field. Such steps are essential to ensure a future in which the benefits of artificial intelligence are shared by all of society.
5. Issues of Human-Robot Collaboration
As robots develop and gain the ability to perform tasks that were once exclusive to people, the issue of their collaboration with humans becomes more relevant. It is expected that in the currently developing socio-technological society, humans will collaborate with robots to create new products or render services and become colleagues with them, which requires new rules of communication and behavior. Unpredictable and unimaginable prospects may emerge as a result of collaboration between collective natural intelligence and collective artificial intelligence: communicating with one another, being useful to one another, and in some cases causing damage and creating certain threats. Naturally, there are a number of futurological ideas in this regard.
It must be noted that modern intelligent robots are equipped with sensors and algorithms that allow them to collect large volumes of data, which facilitates the identification of patterns and trends.
This data can be used to make informed decisions, improve processes and increase overall efficiency. Robots can be customized and programmed to perform specific tasks. All this makes robots more efficient than humans at a number of tasks, which in turn can help increase overall productivity and meet the specific needs of each business area.
5.1. Advantages of Human-Robot Collaboration
One of the main advantages of human-robot collaboration is increased efficiency. Robots can perform tasks faster and more accurately than humans, allowing people to focus their attention and time on more complex and creative work. For example, robots can be used to automate repetitive tasks in manufacturing and assembly, which allows people to concentrate on higher-level tasks such as quality control and problem-solving.
Another advantage of human-robot collaboration is safety. Robots can be used in dangerous environments such as construction sites or disaster areas, which reduces the risk of injury to humans. They can also be programmed to work with hazardous materials or to perform tasks that are dangerous for humans, such as exploring the depths of water reservoirs.
Humans, alongside robots, can also benefit from advanced collaboration. For example, robots can be programmed to cooperate with humans in order to perform tasks more efficiently in a factory setting. This approach can lead to increased productivity and reduced costs.
5.2. Risks of Human-robot Collaboration
Robots can automate certain tasks, making certain positions unnecessary and thus leading to unemployment and reduced economic security for workers. They can also collect and store large volumes of information that is vulnerable to hacker attacks or misuse. This calls for strong confidentiality and security measures to protect sensitive information and prevent its misuse. As robots integrate into the workforce, appropriate norms and ethical rules become increasingly necessary.
Another challenge of human-robot collaboration concerns ethics. For example, the use of robots in military operations, the confidentiality of personal data, and questions of robot ownership and control lead to complex ethical dilemmas.
Despite the many advantages of human-robot coexistence, there are still technical limitations to be overcome. For example, robots are still unable to handle tasks that require human-like dexterity and decision-making, such as handling delicate objects or making ethical judgments. Until these limitations are eliminated, opportunities for efficient collaboration between humans and robots will remain limited.
The use of smart robots that can cooperate with humans (for example, drones, automated floor cleaners, autonomous industrial carts, etc.) creates certain legal problems for employers in the field of labor protection and accident investigation. These include:
1. provision of a safe work environment in accordance with work safety regulations;
2. development of adequate regulations on the safety systems most commonly used to reduce threats related to robots and to robotic systems' characteristics that produce unusual threats;
3. investigation of the causes of accidents caused by robots (the "black box" problem);
4. determining whether compensation for damage inflicted on workers by a robot should be paid by its manufacturer or its user.
Overall, the main principles ensuring human-robot collaboration cover the following:
1. Interoperability: development of standards and protocols for effective communication and collaboration within human-robot systems.
2. Security: provision of safety standards in human-robot collaboration and minimization of accident and injury risks.
3. Labor division: effective division of tasks that takes the respective strengths of humans and robots into account.
4. Mutual understanding: enabling robots to understand the intentions and behavior of humans, and assisting humans in understanding robot behavior.
5. Learning and adaptation: robots learning from their human peers and adapting to different working environments.
6. Ergonomics: designing the work environment for the comfortable and effective activity of both humans and robots.
By applying these principles, human-robot collaboration can become more effective and productive.
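The labor-division principle above can be sketched as a simple allocation rule: each task goes to whichever worker type is better suited to it. The suitability scores and task names below are invented for illustration and are not from the article:

```python
# Toy sketch of labor division: assign each task to the better-suited worker type.
def divide_labor(tasks):
    """tasks: {name: (human_score, robot_score)} -> {name: 'human' or 'robot'}"""
    return {name: ("robot" if robot >= human else "human")
            for name, (human, robot) in tasks.items()}

assignments = divide_labor({
    "repetitive welding": (0.3, 0.9),   # robots excel at repetition
    "quality judgment":   (0.8, 0.4),   # humans excel at nuanced judgment
    "heavy lifting":      (0.2, 0.95),
})
print(assignments)
# → {'repetitive welding': 'robot', 'quality judgment': 'human', 'heavy lifting': 'robot'}
```

In practice such scores would come from measured performance data rather than being fixed by hand, but the principle of matching tasks to strengths is the same.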
6. Risks Related to the Ability of AI to Create Fake Content
The capabilities of generative AI technologies create fertile ground for producing fake content (text, audio, video, photo). Technologies created for developing such fake content are called DeepFakes.
The future development of DeepFake and related technologies raises major concerns, both ethically and legally. The speed, affordability and scale at which such technologies can be applied are particularly concerning. Such forgeries can be used to damage people's reputations, interfere with elections, manipulate consumer behavior, and even tamper with evidence in court.
European Commission Vice-President Vera Jourova notes that advanced technologies such as ChatGPT are able to create complex, visually convincing content and images within seconds. According to her, photo generators can produce authentic-looking photos of events that never happened, and voice-generating software can imitate a human voice based on a sample of just a few seconds.
As DeepFake technologies improve, their use in the criminal sphere also expands. According to experts, DeepFakes can create serious problems in terms of protecting national security, statehood and public order. For example, a fake video appeal on behalf of Ukrainian President V. Zelensky was posted on the Internet in 2022. In that video, the Ukrainian army was asked to surrender to the Russian troops. Imagine what would have happened if the army had surrendered, believing that fake appeal.
Preventing the creation of fake content by AI technologies, and taking measures against content that has already been created, has become a pressing issue for all mankind. First, IT companies must invest in solutions (Anti-DeepFake) that can identify DeepFake content. In turn, government agencies and social network and media managers must strengthen verification procedures to distinguish real content from fake content. Finally, the public must be continuously informed about the dangers and limitations of DeepFakes. It is important to prevent people from losing trust in videos and photos: from the standpoint of legal practice, such a loss of trust poses no less danger than the fake content itself, because visual materials form an important part of the evidence base in court proceedings.
The European Union has already required tech platforms, including Google, Facebook and YouTube, to detect and publicly label AI-generated photos, videos and texts for users. This is part of the European Commission's effort to fight disinformation.
Some countries are making efforts to address the creation of fake content at the legislative level. In China, such videos must be labelled. In some US states, including California, the pre-election distribution of fake content depicting politicians is prohibited. In France, sanctions are applied for editing others' speeches or images without their consent.
So, the fake content creation capabilities of AI must be taken seriously. Deepfake videos, fake news and AI-generated texts can lead to disinformation and trust issues. These problems can be addressed using the following methods:
1. Education and training: education programs and awareness campaigns can be organized to help people detect fake content.
2. Technological solutions: AI-based tools must be developed to detect fake content. Such tools can identify non-authentic content via audio-visual content analysis.
3. Legal regulations: a legal framework can be developed to limit the production and distribution of fake content.
4. Platform liability: social media platforms and other content distributors should update their policies and algorithms to provide greater control over published content.
5. Verification tools: users can rely on independent fact-checking services and verification tools to check the authenticity of content.
A combination of the above-mentioned approaches can help overcome the problem of fake content and maintain trust in the digital environment.
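One building block of such technological solutions is provenance checking: a publisher registers a cryptographic fingerprint of authentic media, and anyone can later recompute the fingerprint to detect tampering. The sketch below is an illustrative toy, not a production Anti-DeepFake tool; all names and data are invented:

```python
# Toy provenance check: detect tampering by comparing cryptographic hashes.
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

class ProvenanceRegistry:
    def __init__(self):
        self._known = set()

    def register(self, media: bytes):
        """Publisher records the fingerprint of an authentic original."""
        self._known.add(fingerprint(media))

    def is_authentic(self, media: bytes) -> bool:
        """Any bit-level change produces a different hash, so tampering is detected."""
        return fingerprint(media) in self._known

registry = ProvenanceRegistry()
original = b"official video appeal, frame data..."
registry.register(original)

tampered = original + b" [synthetically altered]"
print(registry.is_authentic(original))   # → True
print(registry.is_authentic(tampered))   # → False
```

Hash-based checks only prove whether a file matches a registered original; detecting a wholly synthetic video that was never registered requires the content-analysis tools discussed above.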
7. Risks of AI Application in Military Field
One of the most important AI risks and threats is related to the application of relevant technologies in the military field. Therefore, correct principles for the use of artificial intelligence for military purposes must be defined.
For practical and ethical reasons, the most effective and correct way to deal with the possible consequences of AI-equipped military robotics is not a complete ban on its use. The robotics revolution has the potential to change many areas of social life, and modern developments in the military field cannot be imagined without artificial intelligence. Artificial intelligence offers great benefits in terms of reliable protection of manpower, accurate identification of targets, prevention of unnecessary losses and destruction, reduction of defense costs, etc. Given that many nations have already invested heavily in the development of combat robots and robotic weapons, a complete ban on such military technologies is difficult.
Put simply, robotized weapons and automated military networks must be designed in such a way that they cannot become unpredictable (go out of control). For this reason, it is necessary to develop standards for combat robots and, more broadly, for the international robotics industry. At the same time, it has become necessary to adopt a special international convention on the obligations undertaken by countries regarding the security, legal and ethical aspects of AI application.
8. Risks Related to AI Application in Health System
Currently, AI successfully performs various tasks in healthcare. It easily compares current and previous medical studies, automatically detects pathologies, accelerates the diagnosis process, evaluates and monitors patients' condition, prescribes individual treatment, assists in the selection of drugs and optimizes clinical trials.
AI also fulfills the task of helping people directly. For example, a device named Activity Compass is intended to maximize the spatial orientation of patients who have completely lost their memory. The automation of specialized processes has already produced many working systems, and many more are being developed and tested.
At the same time, AI-based medicine has certain problems in comparison with traditional healthcare. In 2022, the market share for robotic surgery in general surgery applications was 23%. According to forecasts, by 2030 the market for surgical robots will be led by general surgery, with an 87% share. At the same time, there are many documented cases of malfunctions and faults in surgical robots.
Overall, problems related to the application of AI in medicine can be classified as follows:
1. the probability of low-quality data in the datasets provided for artificial intelligence training;
2. the risk of misdiagnosis based on insufficient input data;
3. higher prices in comparison with traditional tools;
4. the complexity of developing models and mechanisms for AI;
5. problems related to cybersecurity and confidentiality;
6. risks of the use of AI by criminal groups for malicious purposes via hacking;
7. risks of misuse of personal data.
9. Conclusion
The conducted studies and analyses demonstrate that the majority of problems and risks related to the application of AI in different fields of activity result from the lack of a relevant legal framework. Issues such as liability for negative situations caused by the application of artificial intelligence, the legal status of various intellectual products and objects created with the help of artificial intelligence, the processing of personal data by artificial intelligence, and combating fake content are among those legal problems. Artificial intelligence is a new, unprecedented field for the legal and public administration system. Therefore, the application of traditional management and regulatory mechanisms in this area is not very effective. Taking all this into account, it is necessary to develop appropriate management and regulatory mechanisms by correctly assessing the new legal realities created by artificial intelligence.
Other important AI-related problems are of economic origin. Without a doubt, the effective application of AI technologies in different fields strongly impacts the development of the state, the economy and social security. However, AI causes significant concern from the standpoint of the labor market and the population's employment. Risks of further widening income disparities among people and further deepening of social inequality are emerging. These negative processes can occur due to mass unemployment, unequal accessibility of AI-based digital technologies and deepening monopolies. At the same time, the transformation of robots into the main labor force and the resulting increase in unemployment can create serious problems for the tax and finance systems of states. Therefore, in the era of artificial intelligence, countries and the world community in general should reconsider employment policy, the issue of technological monopolies and taxation concepts.
Abbreviations
AI: Artificial Intelligence
IoT: Internet of Things
Author Contributions
Rasim Mahammad Alguliyev: Conceptualization, Formal Analysis, Supervision, Validation, Project administration, Writing – review & editing
Rasim Sharif Mahmudov: Resources, Data curation, Formal Analysis, Investigation, Methodology, Writing – review & editing
Conflicts of Interest
The authors declare no conflicts of interest.
References
[1] PwC (2017). Exploiting the AI Revolution.
[2] Rodrigues, R. (2020). Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Journal of Responsible Technology, 4: 100005.
[3] Lafranconi L. (2023). Isaac Asimov’s I, Robot: Exploring the Ethics of AI Before it was Cool.
[4] Koon, Y. (2022, June 30). Risks Rise As Robotic Surgery Goes Mainstream.
[5] European Parliament (2017). European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103 (INL)).
[6] Soyer, B., & Tettenborn, A. (2023). Artificial intelligence and civil liability—do we need a new regime? International Journal of Law and Information Technology, 30: 4, 385–397.
[7] Irwin, L. (2021, December 9). The GDPR: Understanding the 6 data protection principles.
[8] Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121-154.
[9] Trovato, S. (2023, December 05). The Complete Guide to AI Transparency [6 Best Practices].
[10] Balasubramaniam, N., Kauppinen, M., Rannisto, A., Hiekkanen, K., & Kujala, S. (2023). Transparency and explainability of AI systems: From ethical guidelines to requirements. Information and Software Technology, 159: 107197.
[11] Dutta, S. & Lanvin, B. (2023). Network Readiness Index 2023, Portulans Institute.
[12] Mešević, I. R. (2023). Reevaluating Main Concepts of Intellectual Property in the Light of AI-Challenges. International Workshop on cross-cutting topics in legal studies, MELE 2023: Modernising European Legal Education (MELE), 223–233.
[13] Zhuk, A. (2023). Navigating the legal landscape of AI copyright: a comparative analysis of EU, US, and Chinese approaches. AI and Ethics.
[14] Picht, P. G., & Thouvenin, F. (2023). AI and IP: Theory to Policy and Back Again – Policy and Research Recommendations at the Intersection of Artificial Intelligence and Intellectual Property. IIC-International Review of Intellectual Property and Competition Law, 54: 916–940.
[15] U.S. Patent and Trademark Office (2020, October). Inventing AI. Tracing the diffusion of artificial intelligence with U.S. patents.
[16] Rieder, B., Sileno, G., & Gordon, G. (2021, October 1). A New AI Lexicon: Monopolization.
[17] European Parliament (2019). Economic impacts of artificial intelligence.
[18] Chui, M., Hazan, E., Roger, H., Roberts, R., Singla, A., Smaje, K., Sukharevsky, A., Yee, L., & Zemmel, R. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey & Company.
[19] Boston Consulting Group (2018, April 18). AI in the Factory of the Future. The Ghost in the Machine.
[20] von Joerg, G., & Carlos, J. (2022). Design framework for the implementation of AI-based (service) business models for small and medium-sized manufacturing enterprises. Journal of the Knowledge Economy, 4: 3551–3569.
[21] Qin, Y., Xu, Z., Wang, X. & Skare, M. (2023). Artificial Intelligence and Economic Development: An Evolutionary Investigation and Systematic Review. Journal of the Knowledge Economy.
[22] Mukherjee, A. N. (2022). Application of artificial intelligence: benefits and limitations for human potential and labor-intensive economy – an empirical investigation into pandemic ridden Indian industry. Management Matters, 19: 2, 149-166.
[23] WEF (2023, April 30). Future of Jobs Report 2023.
[24] Emen, T. (2020, September 4). The Potential Economic Effects of Artificial Intelligence. TRQ.
[25] Perc, M., Ozer, M., & Hojnik, J. (2019). Social and juristic challenges of artificial intelligence. Palgrave Communications, 5: 61.
[26] Chen, Z. (2023, September 13). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10: 567.
[27] Bughin, J., & van Zeebroeck, N. (2018, Sep 10). 3 'AI divides' and what we can do about them.
[28] Patil, S., Vasu, V., & Srinadh, K. V. S. (2023). Advances and perspectives in collaborative robotics: a review of key technologies and emerging trends. Discover Mechanical Engineering, 2: 13.
[29] Soori, M., Arezoo, B., & Dastres, R. (2023). Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cognitive Robotics, 3: 54–70.
[30] Othman, U., & Yang, E. (2023). Human–Robot Collaborations in Smart Manufacturing Environments: Review and Outlook. Sensors, 23: 5663.
[31] Magrini, E., Ferraguti, F., Ronga, A. J., Pini, F., De Luca, A., & Leali, F. (2020). Human-robot coexistence and interaction in open industrial cells. Robotics and Computer-Integrated Manufacturing, 61: 101846.
[32] Burling, M., Harris, B. J., & Schaeffer, D. (2023). Artificial Intelligence: Key Legal Issues. Practical Law.
[33] Kokotinis, G., Michalos, G., Arkouli, Z., & Makris, S. (2023). On the quantification of human-robot collaboration quality. International Journal of Computer Integrated Manufacturing, 36: 10, 1431–1448.
[34] Daniel, M. (2022). Optimizing Decision-Making for Human-Robot Collaboration. Thèse de Doctorat de l'Université Clermont Auvergne en Électronique et Systèmes.
[35] Mutabazi, P. (2023). What is Deepfake Technology? May 1, 2023,
[36] Chee, F. Y. (2023, June 5). AI generated content should be labelled, EU Commissioner Jourova says.
[37] Nishimura, A (2023, November 2). Human Subjects Protection in the Era of Deepfakes.
[38] Pearson, J., & Zinets, N. (2022, March 17). Deepfake footage purports to show Ukrainian president capitulating.
[39] Birchard, R. (2023, May 6). AI content: EU asks Big Tech to tackle disinformation.
[40] Fitri, A. (2023, January 10). China has just implemented one of the world’s strictest laws on deepfakes.
[41] Halm, K. C., Kumar, A., Segal, J., & Kalinowski, IV C. (2019, October 14). Two New California Laws Tackle Deepfake Videos in Politics and Porn.
[42] Xuf, Y. (2023, September 21). French Content Moderation and Platform Liability Policies.
[43] van der Sloot, B., & Wagensveld, Y. (2022). Deepfakes: regulatory challenges for the synthetic society. Computer Law & Security Review, 46: 105716,
[44] Aïmeur, E., Amri, S., & Brassard, G. (2023) Fake news, disinformation and misinformation in social media: a review. Social Network Analysis and Mining, 13: 30.
[45] Christie, E. H., Ertan, A., Adomaitis, L. & Klaus, M. (2023). Regulating lethal autonomous weapon systems: exploring the challenges of explainability and traceability. AI and Ethics.
[46] Strategic Market Research (2023). Top Robotic Surgery Statistics to Follow in 2023.
[47] Sunarti, S., Rahman, F. F., Naufal, M., Risky, M., Febriyanto, K., & Masnina R. (2021). Artificial intelligence in healthcare: opportunities and risk for future. Gaceta Sanitaria, 35: 1, S67-S70.
[48] Manne, R., & Kantheti, S. C. (2021). Application of Artificial Intelligence in Healthcare: Chances and Challenges. Current Journal of Applied Science and Technology, 40: 6.