1 Department of Science in Business Analytics, Trine University, Angola, IN 46703, USA
2 Department of Electronics and Communications Engineering, East West University, Jahurul Islam Avenue, Jahurul Islam City, Aftabnagar, Dhaka-1212, Bangladesh
3 Department of Law, Stamford University Bangladesh, 51 Siddeswari Road (Ramna), Dhaka-1217, Bangladesh
4 Department of Law, University of Derby (London), Kedleston Road, Derby DE22 1GB, UK
Although AI is a helpful tool for the evolving business environment, its rapid and uncontrolled integration into business processes can create legal and ethical challenges. This research paper delves into the legal and ethical considerations essential for effectively regulating AI in business environments. As AI systems increasingly influence decision-making processes and operational efficiencies, the need for comprehensive, forward-thinking regulation becomes imperative. On the ethical side, safeguards around algorithms are needed to protect rights, prevent discrimination, and ensure the transparency of AI-driven decisions. This paper examines key legal issues such as data privacy, intellectual property rights, and liability. For example, while the General Data Protection Regulation (GDPR) provides a framework for data protection, it remains unclear how its provisions apply to AI's sophisticated data handling and automated decision-making processes. Ethical considerations further complicate the regulatory landscape: AI systems can inadvertently perpetuate biases present in their training data, leading to discriminatory practices and fairness concerns. This paper explores the ethical implications of such biases and the need for transparency in AI algorithms to ensure that they operate in a manner consistent with societal values. To address these challenges, the paper proposes a regulatory approach, advocating the modernization of existing legal frameworks to better encompass AI-related issues and the establishment of ethical standards to guide AI deployment. Ethical AI guidelines and policies are discussed, annual audits are recommended, and ethical AI standards are promoted. In addition, the paper identifies potential opportunities and threats likely to emerge as AI becomes further entrenched in organizational activities. After reviewing this article, readers should understand how to harness AI's power while avoiding legal pitfalls.
DOI: https://doi.org/10.103/xxx © 2024 C5K Research Publishing
AI refers to the creation of machines, especially computer systems, that mimic the human mind in perceiving, analyzing, and deciding. It is also a process through which machines and computers can learn from historical data and make decisions on their own without consulting humans. The advent of a new industrial revolution is signaled by innovations that present both new obstacles and significant rewards (Tourk & Marsh, 2016). In recent years, the rapid development and deployment of artificial intelligence (AI) technologies have profoundly transformed various sectors, including business. Global spending on AI and cognitive systems increased from $43.81 billion in 2018 to $93.5 billion in 2021 (Loureiro et al., 2021; Zhang et al., 2021). AI's capabilities, from machine learning and natural language processing to autonomous decision-making, have unlocked new opportunities for innovation and efficiency. However, these advancements have also raised significant legal and ethical challenges that necessitate a thoughtful and comprehensive approach to regulation. Because modern technologies have made it easier for people, and even machines themselves, to cause harm, regulation must take this into account (Letheren et al., 2020). As businesses increasingly integrate AI into their operations, it is imperative to address the complexities surrounding its use to ensure that it benefits society while mitigating potential risks.
The intersection of AI and business brings forth a myriad of regulatory and ethical issues that are yet to be fully addressed. AI systems, capable of analyzing vast amounts of data and making autonomous decisions, present challenges related to accountability, transparency, privacy, and fairness. Regulatory frameworks must evolve to keep pace with technological advancements, balancing the need for innovation with the protection of fundamental rights and societal values. The legal landscape for AI in business is multifaceted and varies significantly across jurisdictions. Key areas of concern include data protection, intellectual property, liability, and compliance with existing regulations. For instance, the General Data Protection Regulation (GDPR) in the European Union has established stringent requirements for data privacy and security, impacting how AI systems collect, store, and process personal information (Regulation (EU) 2016/679). Similarly, the California Consumer Privacy Act (CCPA) has introduced significant provisions affecting AI-driven data practices in the United States (California Civil Code § 1798.100 et seq.). Liability for AI-driven decisions is another critical legal issue. Determining responsibility when AI systems cause harm or operate in unintended ways presents challenges for traditional legal frameworks. Courts and legislators must grapple with questions of product liability, negligence, and the allocation of risk between developers, users, and other stakeholders (Wong, 2020). Furthermore, the protection of intellectual property rights in the context of AI-generated innovations raises questions about patentability and ownership (Fatima, 2022).
The ethical implications of AI in business are equally pressing. The use of AI systems often involves decisions that can impact individuals' lives, such as in hiring, lending, or law enforcement. These applications raise concerns about fairness, bias, and discrimination. AI systems can perpetuate and even exacerbate existing biases if they are trained on biased data or if their algorithms are not designed to account for ethical considerations (O'Neil, 2017). Transparency in AI decision-making is crucial to maintaining trust and accountability. The "black box" nature of many AI systems, where the decision-making process is opaque, complicates efforts to understand and rectify erroneous or biased outcomes (Lipton, 2016). To address these challenges, researchers and policymakers advocate for explainable AI (XAI) techniques that aim to make AI systems' operations more interpretable and accountable (Doshi-Velez & Kim, 2017). Furthermore, the deployment of AI raises questions about the ethical use of surveillance and data collection. Businesses must navigate the delicate balance between leveraging data for competitive advantage and respecting individuals' privacy rights (Tufekci, 2014). Ethical guidelines and industry standards are essential for ensuring that AI technologies are used responsibly and that their benefits are distributed equitably.
In response to these legal and ethical challenges, various regulatory approaches and frameworks have been proposed and implemented. For example, the European Commission's proposed Artificial Intelligence Act aims to establish a comprehensive regulatory framework for AI, categorizing AI applications based on their risk levels and imposing corresponding requirements (European Commission, 2021). Similarly, the OECD has developed principles for responsible AI that emphasize the need for transparency, accountability, and human-centered values (OECD, 2019). The role of industry self-regulation and voluntary standards also plays a significant part in addressing AI's challenges. Initiatives such as the IEEE's Ethically Aligned Design and the Partnership on AI aim to develop guidelines and best practices for ethical AI development and deployment (Alvarez et al., 2016). These efforts highlight the importance of collaborative approaches involving multiple stakeholders, including policymakers, businesses, researchers, and civil society. The regulation of AI in business is a complex and evolving field that requires careful consideration of both legal and ethical dimensions. As AI technologies continue to advance and integrate into various business practices, it is crucial to develop robust regulatory frameworks and ethical guidelines that address the challenges and opportunities presented by these technologies. By fostering a balanced approach that promotes innovation while safeguarding fundamental rights and societal values, stakeholders can ensure that AI serves as a force for positive change in the business world.
The advent of artificial intelligence (AI) has significantly reshaped the business landscape, offering unprecedented opportunities for efficiency and innovation. However, it has also introduced complex legal and ethical challenges that demand rigorous examination. Ngozi Samuel Uzougbo, Chinonso Gladys Ikegwu, and Adefolake Olachi Adewusi state in their research paper that artificial intelligence is transforming the financial services industry by enhancing efficiency, innovation, and personalization, while also bringing significant legal and ethical challenges. Their paper explores the legal accountability associated with AI in financial services, focusing on the allocation of responsibility for AI errors and regulatory violations. It evaluates existing legal frameworks, including data protection and consumer protection laws, for their effectiveness in addressing AI-related issues. Ethical concerns, such as algorithmic bias, transparency, and fairness, are also critical, as they affect individuals' financial well-being and access to services. The paper discusses the need for ethical guidelines and robust frameworks to ensure responsible AI use, reviews the role of regulatory bodies and industry standards in managing these challenges, and offers recommendations for clear guidelines, improved transparency, and accountability. Overall, it aims to ensure that AI is utilized responsibly and ethically in financial services, benefiting both businesses and consumers (Uzougbo et al., 2024). Another researcher, Corinne Cath, introduces in her paper the special issue titled "Governing Artificial Intelligence: Ethical, Legal, and Technical Opportunities and Challenges." As AI increasingly influences various sectors, from critical areas like urban infrastructure, law enforcement, and healthcare to everyday applications like dating, the need to ensure AI is accountable, fair, and transparent becomes more pressing. The issue features eight in-depth analyses exploring the ethical, legal-regulatory, and technical challenges of developing effective governance frameworks for AI systems. It reviews recent advancements in AI governance, highlighting current agendas for AI regulation, ethical frameworks, and technical strategies, and aims to stimulate further discussion on AI governance by presenting concrete suggestions for advancing the regulatory and ethical oversight of AI technologies (Cath, 2018).
Marco Tulio Daza and Usochi Joanann Ilozumba published a systematic review in 2022 stating that artificial intelligence is rapidly transforming business practices, offering significant benefits but also raising serious ethical concerns. AI systems can make autonomous decisions and impact users and the environment, leading to various ethical challenges. Their study examines the ethics of AI in business by analyzing articles from key business journals published up to mid-2021. It identifies the most influential journals, articles, and authors, and highlights the predominant ethical frameworks and issues associated with AI in business. The study outlines the current state of the field, maps out emerging trends, and provides insights into the evolving ethical landscape of AI in business (Daza & Ilozumba, 2022).
Legal Frameworks for AI Regulation: The legal regulation of AI in business is a burgeoning field, with significant discourse centered around data protection, intellectual property, and liability. The General Data Protection Regulation (GDPR) in the European Union represents a pioneering legal framework in the context of AI. GDPR imposes stringent requirements on data collection, processing, and user consent, significantly impacting AI systems that rely on vast amounts of personal data (Regulation (EU) 2016/679). Researchers such as Veale and Binns (2017) argue that GDPR provides a robust foundation for data protection but also highlight the challenges of balancing privacy with the needs of AI innovation (Veale & Binns, 2017). The California Consumer Privacy Act (CCPA) extends similar privacy protections to consumers in the U.S., introducing new considerations for businesses employing AI technologies (California Civil Code § 1798.100 et seq.). However, as noted by Utz et al. (2022), the CCPA's broad scope and complex compliance requirements can pose significant challenges for AI-driven enterprises, particularly those involved in large-scale data processing.
Intellectual property rights in the context of AI present additional complexities. Traditional IP frameworks, such as patents and copyrights, struggle to address the nuances of AI-generated inventions and creations. Berriman (2020) discusses how current patent laws may need to adapt to accommodate AI-driven innovations, suggesting that AI's role as an inventor challenges conventional notions of authorship and ownership. The intersection of AI with IP rights remains a dynamic area of legal scholarship, with ongoing debates about the adequacy of existing frameworks (Abbott et al., 2024). Liability for AI-driven decisions represents another critical legal issue. The allocation of responsibility when AI systems cause harm or operate in unintended ways complicates traditional liability models. Herrmann (2023) examines the limitations of existing product liability frameworks in addressing AI-related harm, advocating for new approaches that better align with the autonomous nature of AI systems (Herrmann & Cameron, 2023). This includes exploring hybrid models that distribute liability among developers, users, and other stakeholders.
Ethical Implications of AI in Business: The ethical dimensions of AI in business encompass concerns about fairness, transparency, and the responsible use of technology. One of the most prominent issues is algorithmic bias. AI systems can perpetuate and even exacerbate existing biases if they are trained on biased data or lack appropriate safeguards (O'Neil, 2017). Barocas and Selbst (2016) highlight how bias in AI algorithms can lead to discriminatory outcomes, stressing the importance of implementing fairness-aware algorithms and conducting rigorous bias audits (Barocas & Selbst, 2016; O'Neil, 2017). Transparency and explainability in AI decision-making are crucial for maintaining public trust and accountability. The "black box" nature of many AI systems, where the decision-making process is opaque, poses significant challenges for users and regulators (Lipton, 2016). Doshi-Velez and Kim (2017) propose the development of explainable AI (XAI) techniques to enhance interpretability and provide stakeholders with meaningful insights into AI decision-making processes (Doshi-Velez & Kim, 2017; Lipton, 2016). These approaches aim to bridge the gap between complex algorithms and human understanding, promoting transparency and trust. The ethical use of surveillance and data collection by AI systems also raises important concerns. Tufekci (2014) discusses how businesses' use of AI-driven surveillance technologies can infringe on privacy rights and lead to intrusive data practices (Tufekci, 2014). The need for ethical guidelines and industry standards to govern the responsible use of AI technologies is emphasized by several researchers. For instance, the IEEE's Ethically Aligned Design (2019) provides a framework for prioritizing human well-being and ethical considerations in AI development (Schiff et al., 2020).
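To make the notion of explainability more concrete, the following is a minimal, illustrative sketch of one model-agnostic XAI technique, permutation feature importance; the dataset is a synthetic stand-in and the example is not drawn from the works cited above.

```python
# A minimal sketch of permutation feature importance, one model-agnostic
# explainability technique. Data and model are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a business dataset (e.g., loan applications)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in score: large drops
# mark the features the "black box" model relies on most.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```

In practice, such importance scores are only a starting point; rigorous XAI audits combine several complementary techniques.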
Emerging Regulatory Approaches: Recent developments in AI regulation reflect an evolving understanding of the technology's implications and the need for comprehensive oversight. The European Commission's proposed Artificial Intelligence Act (2021) represents a significant regulatory initiative aimed at addressing the risks associated with AI applications. The Act categorizes AI systems based on their risk levels, imposing varying requirements for transparency, accountability, and oversight (European Commission, 2021). According to Kroll and Berzins (2022), this risk-based approach is intended to balance innovation with safety, although its effectiveness will depend on careful implementation and enforcement. The Organization for Economic Co-operation and Development (OECD) has also contributed to the discourse with its principles for responsible AI (OECD, 2019). These principles emphasize the importance of transparency, accountability, and human-centered values in AI development, and the OECD's framework provides a foundational basis for creating regulatory and ethical guidelines that align with broader societal goals. Industry self-regulation and voluntary standards play a crucial role in complementing formal regulatory efforts. Initiatives such as the Partnership on AI (2020) and the IEEE's Ethically Aligned Design (2019) focus on developing best practices and ethical guidelines for AI technologies (Schiff et al., 2020). These efforts highlight the importance of collaboration between stakeholders, including businesses, policymakers, and civil society, in promoting responsible AI development.
Regulating AI in business underscores the complex interplay between legal and ethical considerations. Legal frameworks such as GDPR and CCPA address data protection but face challenges in adapting to AI's evolving nature. Intellectual property and liability issues further complicate the regulatory landscape. The aim of this research paper is to critically examine the current legal and ethical frameworks governing the use of artificial intelligence (AI) in business. The paper seeks to identify and analyze key regulatory challenges and ethical dilemmas associated with AI deployment in business contexts. By evaluating existing laws and guidelines, as well as exploring emerging issues such as algorithmic bias, transparency, and accountability, the paper aims to provide comprehensive recommendations for developing robust regulatory measures and ethical standards. Ultimately, the research will contribute to the creation of balanced approaches that ensure AI technologies are used responsibly, fostering innovation while protecting individual rights and promoting fairness. However, ongoing research and collaboration are essential for developing effective and adaptable regulations that balance innovation with societal values.
The regulation of artificial intelligence (AI) in business is a complex issue that intersects with various legal and ethical dimensions. This research paper seeks to explore and analyze the legal and ethical considerations involved in regulating AI technologies within business contexts. The research methodology combines qualitative and quantitative approaches to provide a thorough understanding of existing frameworks, identify challenges, and propose practical recommendations.
Data assembly: Identify and select relevant academic literature, including peer-reviewed journals, books, conference papers, and policy reports. Sources will focus on topics such as AI ethics, data protection laws, AI regulations, and business practices involving AI. Use databases like JSTOR, Google Scholar, and institutional repositories to gather primary and secondary sources. Emphasis will be placed on recent publications to ensure the inclusion of the latest developments in AI regulation.
Fig. 1. Influence and productivity of academic journals.
The results in Fig. 1 show that the disparity in citation counts between the Journal of Business Ethics (1,072 citations) and a paper from the journal Electronic Markets (94 citations) highlights the broader impact and influence of ethics-focused research on AI in business. The high citation count for the Journal of Business Ethics indicates that its papers address widely relevant and critical issues regarding AI's ethical implications, garnering significant attention from the academic community. In contrast, the lower citation count for the Electronic Markets paper suggests it covers a more specialized or emerging topic within digital markets, resulting in a narrower but still valuable audience.
Data Analysis: The research method for this paper combines comparative analysis, case studies, expert interviews, and policy analysis to provide a comprehensive examination of the legal and ethical considerations in regulating AI in business. By integrating multiple approaches, this study aims to offer a thorough understanding of existing frameworks, highlight challenges, and propose actionable improvements for effective AI regulation and ethical oversight.
This research provides a nuanced understanding of the legal and ethical considerations in regulating AI in business. By highlighting the strengths and limitations of current frameworks, the study offers valuable insights and recommendations for policymakers and businesses to navigate the complex landscape of AI regulation and ethics effectively.
Fig. 2. Global total corporate artificial intelligence investment, 2015-2022
Source: AI Index Report, Stanford University (2023)
Figure 2 illustrates a dramatic increase in AI investment from 2015 to 2021: starting at $12.75 billion in 2015, investment soared to $93.5 billion by 2021. This surge highlights the growing recognition of AI's potential, driven by rapid technological advancements, increased adoption across various industries, and heightened competition to lead in AI innovation. The trend indicates a strong commitment to developing AI capabilities and reflects its rising strategic importance in the global technology landscape.
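As a quick arithmetic check on these figures (assuming the $12.75 billion and $93.5 billion endpoints quoted above from Fig. 2), the implied compound annual growth rate works out to roughly 39%:

```python
# Compound annual growth rate implied by the Fig. 2 endpoints.
start, end, years = 12.75, 93.5, 2021 - 2015  # $B, $B, 6 years
cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~39.4%
```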
Increased investment in artificial intelligence raises risks due to potential misuse, ethical concerns, and unintended consequences, such as bias and security vulnerabilities, requiring careful management and regulation.
Fig. 3. Global annual number of reported artificial intelligence incidents and controversies
The results in Fig. 3 show a significant increase in reported AI incidents and controversies, from 8 in 2012 to 123 in 2023. At the same time, artificial intelligence is playing a crucial role in crime prevention and management: AI is used in predictive policing to forecast high-risk areas, in surveillance systems for real-time monitoring and alerting, and in data analysis to uncover crime patterns and connections. These AI applications help law enforcement respond more effectively and potentially reduce crime rates by identifying and addressing issues before they escalate.
AI incidents, such as biased decision-making, privacy breaches, and security vulnerabilities, can have significant negative impacts on individuals and organizations. These issues arise when AI systems behave unpredictably or harmfully due to flaws in their design or implementation. To prevent such incidents, it is essential to adopt a multi-faceted approach. Firstly, integrating ethical principles into AI development ensures that systems are designed with fairness and transparency at their core. Additionally, implementing strong security measures and conducting frequent vulnerability assessments protect AI systems from adversarial attacks and unauthorized access. Continuous monitoring of AI systems allows for real-time detection and resolution of issues, ensuring they operate correctly and ethically.
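As one concrete illustration of such continuous monitoring, the sketch below flags drift in a deployed classifier's rate of positive decisions; the baseline, threshold, and window size are purely illustrative assumptions, not values drawn from any cited framework.

```python
# A minimal sketch of continuous monitoring: alert when a model's
# positive-prediction rate drifts from a validation-time baseline.
from collections import deque

BASELINE_POSITIVE_RATE = 0.30   # assumed rate observed during validation
ALERT_THRESHOLD = 0.10          # assumed tolerated absolute deviation
WINDOW = deque(maxlen=500)      # rolling window of recent predictions

def record_prediction(pred: int) -> None:
    """Record a 0/1 model prediction and alert on distribution drift."""
    WINDOW.append(pred)
    if len(WINDOW) == WINDOW.maxlen:
        rate = sum(WINDOW) / len(WINDOW)
        if abs(rate - BASELINE_POSITIVE_RATE) > ALERT_THRESHOLD:
            print(f"ALERT: positive rate {rate:.2f} drifted from "
                  f"baseline {BASELINE_POSITIVE_RATE:.2f}; review model.")
```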
Legal Frameworks for Regulating AI: As AI integration in commerce grows, the demand for comprehensive legal frameworks to address emerging issues is increasing. Existing laws such as the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) focus on privacy and data protection in AI applications. Industry-specific regulations, like the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., also guide AI use in sectors such as healthcare. To keep pace with AI advancements, new legislation is being proposed. The EU's AI Act aims to establish a robust legal framework, categorizing AI systems by risk and imposing stringent requirements on high-risk areas like infrastructure and healthcare. This includes mandates on risk assessment, operational transparency, and human oversight, with significant penalties for non-compliance. Internationally, regulatory approaches vary. The U.S. prefers sector-specific regulation, while the EU favors risk-based frameworks. China has implemented stringent AI laws, particularly concerning national security. These diverse approaches create complexities for global companies navigating multi-jurisdictional regulations.
General AI Regulation: General AI regulation addresses the overarching need to manage the deployment and impact of AI technologies across various sectors. As AI systems become increasingly integral to business operations, establishing a coherent legal framework is crucial for ensuring their responsible use. Existing regulations, such as the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), provide foundational guidelines for data privacy but do not fully encompass the complexities of AI's broader applications. The proposed EU AI Act represents a significant step forward by introducing a risk-based approach to AI regulation. This Act categorizes AI systems based on their risk level, imposing stringent requirements on high-risk applications, including transparency, accountability, and human oversight. Such measures are vital for mitigating potential harms and ensuring that AI systems operate safely and ethically.
Data Privacy: Data privacy is a critical aspect of regulating artificial intelligence, given that AI systems often rely on vast amounts of personal data to function effectively. Existing frameworks like the EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) set essential standards for data protection, emphasizing transparency, consent, and data subject rights. These regulations aim to ensure that individuals' personal information is handled responsibly and that businesses are accountable for data breaches.
Intellectual Property: Intellectual property (IP) in the context of artificial intelligence (AI) presents both opportunities and challenges as businesses seek to protect their innovations while fostering a competitive market. Current IP laws, including patents and copyrights, offer mechanisms for safeguarding AI technologies, algorithms, and software. These protections incentivize innovation by ensuring that creators can secure exclusive rights to their inventions and prevent unauthorized use. However, the rapid advancement of AI technology raises unique IP issues. For instance, the algorithms and models driving AI systems can be complex and difficult to patent, and traditional IP frameworks may struggle to address the rapid pace of technological change.
Sector-Specific Regulations: Sector-specific regulations play a crucial role in addressing the unique challenges and risks associated with AI applications in different industries. While broad AI regulations provide a general framework, sector-specific guidelines ensure that the nuances of various fields—such as healthcare, finance, and transportation—are effectively managed. For example, the Health Insurance Portability and Accountability Act (HIPAA) in the U.S. governs the use of AI in healthcare, emphasizing the need for stringent data privacy and security measures. Similarly, financial regulations guide the use of AI in trading and risk management to prevent market manipulation and ensure financial stability. These tailored regulations help address industry-specific risks and operational complexities, fostering trust and ensuring compliance with standards that protect consumers and stakeholders.
International Regulations: International regulations for artificial intelligence (AI) are essential for managing the global implications of AI technologies while addressing diverse legal and cultural contexts. As AI systems cross borders, the need for cohesive international standards becomes critical to prevent regulatory fragmentation and ensure consistent protection across jurisdictions. Currently, approaches vary widely: the European Union emphasizes a risk-based framework with stringent requirements for high-risk AI applications, while the United States adopts a more sector-specific approach, focusing on particular industries rather than a comprehensive national policy. China, conversely, implements strict regulations with a focus on national security and social stability. These disparate approaches create challenges for multinational businesses that must navigate multiple regulatory environments.
Table 1. Legal Frameworks for Regulating Artificial Intelligence in Business
Ethical Frameworks: Artificial Intelligence (AI) presents transformative opportunities across various sectors, but its rapid advancement also raises significant ethical concerns. As AI systems become more integrated into daily life and business operations, addressing these ethical considerations is crucial to ensure they are developed and used responsibly.
Fig. 4. Main ethical issues of AI in business
The results in Fig. 4 summarize the primary concerns raised by the literature on corporate AI ethics, arranged into five categories developed by examining the issues, worries, and principles that arose from the primary debates. The main moral questions surrounding AI in business fall into five categories: first, ethical principles; second, transparency, privacy, and trust; third, bias, preferences, and justice; fourth, employment and automation; and fifth, social media, participation, and democracy.
Ethical Principles: These articles delve into the different levels of AI intelligence: Artificial Narrow Intelligence (ANI), which excels in specific tasks, and Artificial General Intelligence (AGI), which could theoretically handle any problem but does not yet exist. AGI, if achieved, might lead to Artificial Superintelligence (ASI), a hypothetical state of self-aware systems with advanced creativity and wisdom. Currently, AI lacks self-awareness and moral agency, so humans remain accountable for its impacts. Ethical concerns include AI biases and the risks of overreliance on decision support systems, which can lead to discrimination and unfairness.
Transparency, privacy, and trust: One of the problems inherent in most AI solutions is that data is harvested from public sources, which may lead to privacy concerns and data leakage. Any company implementing AI must respect data privacy laws from various jurisdictions, such as the General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA), regarding user data. This means that data collection should be minimized, data must be secured, and users should have complete control over their data. Protecting customer data is mandatory, and it is also a crucial factor in gaining customer trust.
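As an illustration of the data-minimization principle described above, the following sketch (with hypothetical field names, not drawn from any cited regulation) keeps only the attributes a model needs and pseudonymizes the direct identifier before processing:

```python
# A minimal data-minimization sketch: drop unneeded fields and replace
# the direct identifier with a salted hash. Field names are hypothetical.
import hashlib

NEEDED_FIELDS = {"age", "purchase_total", "region"}  # assumed model inputs

def minimize(record: dict) -> dict:
    """Keep only needed fields; replace the identifier with a salted hash."""
    salt = b"rotate-this-salt"  # in practice, manage salts/keys securely
    out = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    out["user_ref"] = hashlib.sha256(
        salt + record["user_id"].encode()
    ).hexdigest()[:16]
    return out

print(minimize({"user_id": "alice@example.com", "age": 34,
                "purchase_total": 120.5, "region": "EU",
                "home_address": "..."}))
```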
Bias, preferences, and justice: Another ethical concern governments and companies face when using AI is system or method bias. The main categories of bias include skewed training data and algorithm-related complications. For instance, AI-driven recruitment tools can discriminate against certain populations if their training data reflects such discrimination. To avoid bias, organizations should generate training data from diverse sources, audit their AI systems, and use AI fairness metrics, as illustrated in the sketch below. Minimizing bias, and concerns about bias, is critical to maintaining the public's trust in deployed artificial intelligence systems.
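The sketch below computes one widely used fairness metric, the demographic parity difference, on synthetic data; the 0.1 alert threshold is an illustrative assumption, not a legal standard.

```python
# A minimal fairness-metric sketch: demographic parity difference, i.e.
# the gap in favourable-outcome rates between two groups.
def selection_rate(decisions, groups, group):
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

# 1 = favourable outcome (e.g., shortlisted); groups "A" and "B"
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = abs(selection_rate(decisions, groups, "A")
          - selection_rate(decisions, groups, "B"))
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.1:  # illustrative audit threshold
    print("Audit flag: selection rates differ materially across groups.")
```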
Employment and automation: The regulation of AI in employment and automation presents both critical challenges and opportunities. While AI-driven automation can enhance efficiency and innovation, it also poses significant risks to job security and economic equality. Current frameworks, such as the EU's Digital Services Act and US labor guidelines, seek to address these concerns by promoting responsible automation practices and supporting reskilling efforts. However, these measures alone may not be sufficient to counteract the broader socioeconomic impacts of automation. Balancing technological advancement with fair labor practices remains a complex challenge requiring coordinated efforts from policymakers, businesses, and educational institutions.
Social media, participation, and democracy: The intersection of AI with social media and democratic processes highlights profound legal and ethical concerns. AI-driven algorithms increasingly shape content visibility and user engagement on social media platforms, raising issues of transparency and bias. Current regulations, such as the EU Digital Services Act, aim to enhance algorithmic transparency and protect user rights, yet significant challenges remain in managing misinformation and ensuring fair representation. Ethically, the influence of AI on democratic processes—such as political advertising and public opinion shaping—requires careful oversight to prevent manipulation and ensure electoral integrity.
Table 2. Ethical Frameworks for Regulating Artificial Intelligence in Business
It is essential to note that the integration of AI into business lines remains a promising but increasingly regulated area. As time passes, the legitimacy and ethicality of related technologies and corporate practices come under greater scrutiny. Measures for tackling these risks include encouraging the ethical use of artificial intelligence, reviewing risks, informing employees, and collaborating with regulators and other stakeholders. Furthermore, in planning for the future, businesses need to anticipate the emergence of new legal concepts, the arrival of new technologies, and efforts toward global harmonization of AI legislation. When such measures are introduced tactfully, deployed AI systems work within the stipulated legal and ethical standards while promoting fairness, transparency, and accountability. In other words, effectively regulating AI will entail a measure of openness that different organizations can adopt to advance their development in a manner that safeguards the rights and welfare of every citizen.
As AI technology rapidly advances, new legal and ethical concerns are emerging that companies must address. The evolving legal landscape includes stricter requirements for data protection, explainability, and accountability, exemplified by the EU's AI Act, which categorizes AI systems based on risk levels. Technological advancements, such as deep learning and generative AI, introduce new challenges, including AI bias and data privacy issues, necessitating ongoing updates to legislation and continuous legal consultation for businesses. Additionally, while a global AI regulatory model is challenging due to differing legal and cultural norms, efforts by organizations like the OECD and UNESCO aim to establish universal standards. Multinational companies need adaptable AI governance frameworks that align with both local regulations and global standards, facilitating responsible and ethical AI use internationally.
Abbott, F. M., Cottier, T., Gurry, F., Abbott, R. B., Burri, M., Ruse-Khan, H. G., & McCann, M. (2024). International intellectual property in an integrated world economy. Aspen Publishing.
Alvarez, M., Bielby, J., & Havens, J. (2016). Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. IEEE.
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. Calif. L. Rev., 104, 671.
Cath, C. (2018). Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080.
Daza, M. T., & Ilozumba, U. J. (2022). A survey of AI ethics in business literature: Maps and trends between 2000 and 2021. Frontiers in Psychology, 13, 1042661.
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.
Fatima, S. (2022). The impact of artificial intelligence on intellectual property laws.
Herrmann, H., & Cameron, R. (2023). Responsible mixed methods research (RMMR): a case for managing ethics and AI in MMR. In Handbook of Mixed Methods Research in Business and Management (pp. 55-75). Edward Elgar Publishing.
Kroll, J. A., & Berzins, V. (2022). Understanding, Assessing, and Mitigating Safety Risks in Artificial Intelligence Systems.
Letheren, K., Russell-Bennett, R., & Whittaker, L. (2020). Black, white or grey magic? Our future with artificial intelligence. Journal of Marketing Management, 36(3-4), 216-232.
Loureiro, S. M. C., Guerreiro, J., & Tussyadiah, I. (2021). Artificial intelligence in business: State of the art and future research agenda. Journal of business research, 129, 911-926.
O'Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
OECD. (2019). OECD principles on artificial intelligence. OECD, Paris, France.
Schiff, D., Ayesh, A., Musikanski, L., & Havens, J. C. (2020). IEEE 7010: A new standard for assessing the well-being implications of artificial intelligence. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC).
Tourk, K., & Marsh, P. (2016). The new industrial revolution and industrial upgrading in China: achievements and challenges. Economic and Political Studies, 4(2), 187-209.
Tufekci, Z. (2014). Engineering the public: Big data, surveillance and computational politics. First Monday.
Utz, C., Amft, S., Degeling, M., Holz, T., Fahl, S., & Schaub, F. (2022). Privacy rarely considered: Exploring considerations in the adoption of third-party services by websites. arXiv preprint arXiv:2203.11387.
Uzougbo, N. S., Ikegwu, C. G., & Adewusi, A. O. (2024). Legal accountability and ethical considerations of AI in financial services. GSC Advanced Research and Reviews, 19(2), 130-142.
Veale, M., & Binns, R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2), 2053951717743530.
Wong, A. (2020). The laws and regulation of AI and autonomous systems. Unimagined futures–ICT opportunities and challenges, 38-54.
Lipton, Z. C. (2016). The mythos of model interpretability. Communications of the ACM, 1-6.
Zhang, D., Mishra, S., Brynjolfsson, E., Etchemendy, J., Ganguli, D., Grosz, B., Lyons, T., Manyika, J., Niebles, J. C., & Sellitto, M. (2021). The AI index 2021 annual report. arXiv preprint arXiv:2103.06312.