1 Department of Business Administration, International American University, Los Angeles, CA 90010, USA
Keywords: Blockchain

Individualized customer information is at the heart of online commerce. Using increasing amounts of customer-specific data enhances the success and value of one-to-one online marketing, but the extensive gathering and use of data specific to individuals also causes alarm over the loss of digital privacy, setting up a confrontation between e-commerce and society. Governments and nations, particularly in Europe, have reacted with a reliance on sweeping laws governing digital privacy protection. In contrast, other nations, such as the U.S., have generally preferred to allow companies and industry associations to regulate themselves. This tenuous balance is under attack from both sides. In this paper, we set up a framework that incorporates the environmental context, ethical perspectives, and firm-specific considerations to help firms develop a strategy for handling digital privacy concerns.
DOI: https://doi.org/10.63471/amlids24003 © 2024 Advances in Machine Learning, IoT and Data (AMLID), C5K Research Publication
The pervasive nature of digital technology in our everyday lives, its use, and its influence on both companies and individuals give rise to ethical concerns regarding its societal role. These concerns encompass issues such as consent and privacy, security, inclusion and fairness, protection against online harm, transparency, and accountability (Allahrakha, 2023; Ngoepe et al., 2010). Prominent instances include the Cambridge Analytica controversy and apprehensions regarding racial bias in the development of facial recognition systems. Addressing these ethical concerns is crucial to maximize the positive societal impact of digital technology. The level of ethical development is primarily contingent upon the specific characteristics of the technology in question (Sarathy & Robertson, 2003).
The ethical considerations surrounding blockchain and cloud services are mostly being explored in academic studies, whereas the ethical implications of technologies like smartphones, robots, and artificial intelligence are more commonly addressed through government recommendations or self-regulation by companies (Kushwaha et al., 2016). Various legislative frameworks already exist or are being developed to address platforms, such as the Digital Services Act in the European Union. These frameworks also cover related topics such as data privacy (e.g., the GDPR) and online harms (e.g., the NetzDG in Germany). This study covers the ethical considerations associated with several types of digital technology. It explores the measures taken by the international community and provides an overview of the tools implemented by private firms. It proposes a balanced strategy that considers both the communal benefit of ethics in business and the individual accountability of users (Stahl et al., 2014).
Ensuring transparency and clarity about the fundamental moral ideals and views at stake is a crucial prerequisite for developing and implementing an ethical process (Desai et al., 2008). Ethical theories in moral philosophy encompass metaethics, which examines the essence of ethics and moral reasoning; normative ethics, which aims to ascertain the principles that govern moral behavior, including deontological, consequentialist, and virtue ethics; and practical ethics (Stahl et al., 2014). Applying ethics to digital technology involves using normative ethics to evaluate specific, practical instances. Alternative ethical framings can also be used to analyze digital technology: "ethics by design" refers to the incorporation of ethical considerations into the development of technology; "ethics of usage" pertains to the ethical implications of how technology is used (Ellßel & Flemming, 2022); and "societal ethics" emphasizes the promotion of societal advantages resulting from the utilization of technology.
The ethical considerations and outcomes of an evaluation may vary depending on the approach used and the timing of the assessment, such as whether it is done during the design phase, throughout usage, or at the end to evaluate the total social impact (Rahman & Jim, 2024). Ethics is also expected to be incorporated into the design and implementation of the technology, along with other fundamental moral perspectives (such as the societal or individual perception of 'good'). Consequently, these values can be embedded in the technology to either promote or diminish specific moral principles, and they become evident through the use of the technology. HTTP cookies, for example, enable the tracking of internet users and the personalization of content, at the expense of online privacy (Tino et al., 2024).

There is a tension between the designer/technologist and the end user/society, where the benefits (and drawbacks) experienced by one group may be in opposition to those of the other (North et al., 2006). This ethical quandary is a pervasive obstacle for information and communication technologies (ICTs): technologists and users will determine which ethical ideals prevail among competing options. The ethical considerations surrounding digital technologies are inherently complex, as they involve weighing potential benefits and drawbacks that differ among parties and depend on the exact environment in which the technologies are employed (Aditto et al., 2023; Kabbo et al., 2023; Rahman & Jim, 2024; Sobuz, Khan, et al., 2024).

Arguably, the most widespread illustration is the issue of data privacy. Examining users' data can enhance the overall user experience of a digital platform, or it can be employed to monitor and trace individuals, as seen in public health COVID-tracing apps, ultimately yielding social advantages. Nevertheless, there may be a trade-off for the end user (i.e., individual citizens) and apprehensions over the misuse of personal data (Shou, 2012). This raises additional questions concerning the ownership of and access to data. Accuracy presents a further quandary: utilizing collected data for automated decision making can enhance efficiency and mitigate human prejudice, yet erroneous data resulting from input mistakes or wrong inferences can harm individuals or compromise the reliability of data-driven decisions. In the majority of instances, however, the use of data or artificial intelligence (AI) can enhance informed decision making.
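To make the cookie mechanism mentioned above concrete, the following minimal sketch shows how a persistent identifier set in a browser cookie can be used to accumulate a browsing profile and personalize responses. It assumes the Flask library is installed; the /page endpoint, the in-memory profiles store, and the "uid" cookie name are illustrative choices, not drawn from the paper.

```python
# Minimal sketch of cookie-based tracking and personalization (illustrative only).
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
profiles = {}  # uid -> list of pages visited (kept in memory for illustration)

@app.route("/page/<name>")
def page(name):
    uid = request.cookies.get("uid")
    if uid is None:
        uid = uuid.uuid4().hex                    # first visit: assign an identifier
    profiles.setdefault(uid, []).append(name)     # record this visitor's history
    # "Personalization": echo back what the site has learned about the visitor.
    resp = make_response(f"Visitor {uid[:8]} has viewed: {profiles[uid]}")
    resp.set_cookie("uid", uid, max_age=60 * 60 * 24 * 365)  # persist for a year
    return resp

if __name__ == "__main__":
    app.run(port=5000)
```

Every later request from the same browser carries the "uid" cookie back, so the profile grows silently; this is the convenience-versus-privacy trade-off discussed above.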
Utilizing technology can provide substantial advantages to individuals and the broader community. Digital technologies have a direct impact on the economy by creating jobs and adding value, as well as by increasing productivity in other industries where they are utilized. Users derive advantages from digital technologies and the accompanying services, as well as social benefits from communication networks. Under certain conditions, however, technology might have detrimental repercussions or be employed for malevolent purposes. Two significant instances are worth mentioning: the Cambridge Analytica scandal, in which the firm obtained personal data from 50 million Facebook users without their consent and then used it to analyze and target individuals during political elections; and the concerns surrounding facial recognition technology, specifically the potential racial bias in algorithms and the discriminatory targeting of certain racial groups.
Fig.1. Biggest challenges in digital ethics (Allahrakha, 2023).
Fig. 1 shows the biggest challenges in digital ethics. The utilization of migration control technologies in immigration and deportation determinations has faced significant criticism due to potential violations of human rights caused by flawed algorithms or the compulsory use of iris scanning to access refugee services, which lacks voluntary and informed consent. It is crucial to recognize that certain business strategies may employ technology in a manner that goes against ethical principles. For instance, companies may collect and analyze data to customize their services or goods for users. Social media and video-sharing platforms (VSPs) have faced criticism for fostering political echo chambers or filter bubbles. These platforms use algorithms that rely on user profile data, network connections, and past activity to promote content, curate homepage displays, and queue videos. These features were designed to keep users engaged on the site, but they may unintentionally result in users being exposed exclusively to information that reinforces a specific narrative, which is not socially ideal. Likewise, the dissemination of false information and fabricated news not only causes harm to society but also generates advertising revenue. Technology has a dual role in both amplifying and combating disinformation: digital platforms utilize artificial intelligence and machine learning (AI/ML) to generate and host material, including misinformation, yet these same technologies are also employed to audit and eliminate false information. Occasionally, companies may employ data to purposefully or unwittingly discriminate against certain individuals.
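As a toy illustration of the filter-bubble dynamic described above, the sketch below ranks content purely by how often the user has engaged with each topic before. The catalog and click history are invented, and this is not any platform's actual recommendation system.

```python
# Engagement-driven ranking sketch: items from already-favored topics rise to the top.
from collections import Counter

def rank(items, history):
    """items: list of (item_id, topic); history: list of topics the user clicked."""
    prefs = Counter(history)
    # Score each item by how often its topic was engaged with previously.
    return sorted(items, key=lambda item: prefs[item[1]], reverse=True)

catalog = [("a1", "politics-left"), ("a2", "politics-right"),
           ("a3", "sports"), ("a4", "politics-left")]
clicks = ["politics-left", "politics-left", "sports"]

print(rank(catalog, clicks))
# [('a1', 'politics-left'), ('a4', 'politics-left'), ('a3', 'sports'), ('a2', 'politics-right')]
```

Because each new click feeds back into the same preference counter, the feed narrows over time toward what the user already engages with, which is the echo-chamber effect criticized above.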
Apple Card underwent an investigation in November 2019 due to allegations of gender discrimination: it was reported that male customers were granted considerably higher credit limits than women with identical or equivalent credit histories. This was attributed to an algorithm used by Apple and Goldman Sachs to assess the creditworthiness of applicants. The utilization of ICTs and the rise of new applications have led to an increase in ethical problems and dilemmas. The ubiquity of technology in daily life – its utilization, construction, and influence on organizations and individuals – raises ethical inquiries regarding its function within contemporary society. There is a growing acknowledgment that it will be essential to incorporate ethics into ICTs, whether through regulation, design, or other methods, in order to maximize the value provided to all stakeholders. Fig. 2 describes Telia's guiding principles on trusted AI ethics.
Fig. 2. Telia's guiding principles on trusted AI ethics.
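Returning to the Apple Card allegations, the sketch below shows the kind of simple group-disparity check that can flag such a pattern for review. All numbers are invented for illustration; this is not the actual Apple or Goldman Sachs model or data.

```python
# Disparity check sketch: compare average credit limits granted to two groups
# with otherwise similar credit profiles (hypothetical data).
def mean(values):
    return sum(values) / len(values)

# (group, credit_score, granted_limit) -- invented records.
decisions = [("M", 720, 20000), ("F", 720, 10000),
             ("M", 700, 18000), ("F", 700, 9000)]

by_group = {}
for group, _score, limit in decisions:
    by_group.setdefault(group, []).append(limit)

averages = {g: mean(limits) for g, limits in by_group.items()}
print(averages)                                                  # {'M': 19000.0, 'F': 9500.0}
print(f"F/M limit ratio: {averages['F'] / averages['M']:.2f}")   # 0.50 -- flags a gap
```

A check like this does not prove discrimination by itself, but a large unexplained gap between groups with similar credit histories is exactly what prompted the regulatory scrutiny described above.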
In conclusion, we emphasize four crucial factors to be taken into account in relation to this or any alternative ethical approach.
A significant continuous challenge is to ensure that ethical frameworks and tools remain adaptive and relevant to emerging technologies. An adaptable and future-oriented strategy should be at the core of the development of governance tools, ranging from explicit laws and regulations to self-regulatory measures such as industry oversight and self-certification. However, the advancement of technology will outpace existing governance mechanisms, giving rise to new ethical challenges and necessitating ongoing evaluation of the usefulness and effectiveness of these governance systems.
Cultural norms and ethical principles vary among countries and cultures. Due to the inherent characteristics of the Internet and digital technology, the implementation of ethics requires worldwide collaboration on standards and governance while also considering varied cultural viewpoints. The inherent tension, intensified by distrust and practical coordination difficulties, can only be resolved through cross-cultural collaboration and the sharing of best practices. According to ÓhÉigeartaigh et al. (2020), potential obstacles to cross-cultural collaboration primarily stem from cultural mistrust rather than fundamental differences. Multinational IT firms face a difficulty when it comes to managing diverse cultural norms while offering services and products in various geographical locations. Multinational corporations have the option either to follow a fragmented strategy, where ethical principles and actions vary depending on the location, or to adhere to the highest standard universally. Microsoft has chosen the latter strategy in relation to personal data and the EU's GDPR, including advocating for comparable legislation in the USA.
Ethics frameworks and governance tools, such as principles, initiatives, and laws, are frequently built by the very actors who are subject to the regulations and guidelines for digital technologies. These activities have brought attention to the increasing concern for ethics across numerous major private corporations. While this may indicate a growing awareness of corporate social responsibility, with firms demonstrating their trustworthiness and accountability to consumers and governments, critics have identified two major concerns with this strategy.
An essential challenge will be to educate and empower individuals in their use of digital technology (Hasan et al., 2023; North et al., 2006; Sobuz, Joy, et al., 2024). Aligning incentives for states and private firms is crucial for the implementation of ethical practices. Private enterprises typically bear the financial burden of adhering to ethical practices, and they stand to gain from the preferences of consumers who prioritize ethical considerations when selecting services and products. Active citizens and customers will also ensure effective public accountability and scrutiny of actions taken by both government and business entities (Fig. 3).
Fig. 3. Share of employees who believe their organization's use of AI has resulted in ethical issues (Kushwaha et al., 2016).
Companies have created internal codes of conduct, which offer guidelines and directives for their staff to follow. Google's "AI Principles" (2018) and its "Responsible AI Practices" aim to promote the sharing of research results, integrate insights into its operations, and evolve continuously. The Microsoft AI Principles (2018) have facilitated the implementation of responsible AI. ARM prioritizes the integration of ethical considerations through its "Trust Manifesto."
Australia has an official position, the eSafety Commissioner, which assists Australians facing online harassment by enabling them to file complaints; the commissioner also possesses the authority to legally compel social media companies to remove specific content. In mid-2019, the U.S. Department of Justice initiated an antitrust investigation targeting major technology corporations. The investigation recently concluded, revealing the extraordinary financial and influential power of these platforms, as well as their anti-competitive practices; the Department has called for stringent actions, including the potential dismantling of these big tech companies. In October 2020, France and the Netherlands submitted a collaborative proposal to the European Commission urging more stringent regulations to oversee entities known as 'gatekeepers'. The European Union, under the Digital Services Act, intends to require greater transparency from major technology companies about the data they acquire from users, and it plans to introduce new regulations targeting fake news and dangerous content. The UK Information Commissioner's Office has issued a fine of £18.4 million on Marriott International Hotels for failing to adequately protect customer data after a cyberattack that affected 339 million guest records.
An increasing number of corporations are addressing the matter of ethics in digital technology within their own organizations. One of the challenges they face in this field is satisfying customers' expectations regarding the product or service provided while also safeguarding the trust of customers, business partners, and governments. For instance, users may want to see their location on a map application immediately upon opening it, but they may not want to be continuously monitored. This pertains to finding a balance between innovation, consumer experience, and ethics. Various tools have been created by both technology and non-technology companies to facilitate the sharing of knowledge and the incorporation of ethical and responsible practices in the design, development, utilization, and administration of their technological goods and services. Companies have mostly focused on platforms, data, and AI technologies from an ethical standpoint, as these areas often intersect significantly, as presented in Table 1.
Table 1. Comparisons of high and low ethical safeguard groups (Kushwaha et al., 2016).
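The map-location example above can be read as a data-minimization design choice: read the location once, use it for the immediate purpose, and neither retain nor stream it. The sketch below illustrates that pattern; it is a hypothetical design, not any vendor's API, and get_current_location is a placeholder returning a fixed point.

```python
# Data-minimization sketch: a one-shot location read used only to render the view,
# with no persistence and no background tracking loop (hypothetical pattern).
from dataclasses import dataclass

@dataclass(frozen=True)
class Location:
    lat: float
    lon: float

def get_current_location() -> Location:
    # Placeholder for a platform location call; returns a fixed point here.
    return Location(34.0522, -118.2437)

def center_map_once() -> str:
    loc = get_current_location()   # read only when the user opens the map
    view = f"map centered at ({loc.lat}, {loc.lon})"
    # The raw coordinates go out of scope here: nothing is stored or uploaded.
    return view

print(center_map_once())
```

The same feature could instead poll the location continuously and log it server-side; the ethical difference lies entirely in this design choice, not in the user-visible behavior.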
Ultimately, the examination and evaluation of ethics in IS substantiate the notion that ethical conduct in this domain has a profound influence on the entire business. Moreover, IS management functions within the broader organizational framework, and the ethical integrity of its activities mirrors that of the organizational culture. Although IS professionals strive to act responsibly and ethically, they may be influenced by a prevailing atmosphere of misconduct and give in to the demands of expediency, even if they are inherently principled. Supports, in the form of safeguards, are crucial for directing and strengthening their noble objectives.
The consequences for organizations are evident and filled with cautionary notes. However, it is worth noting that organizations that prioritize ethical safeguards not only provide necessary support but also experience significant benefits in terms of employee morale, performance, and, ultimately, their financial profitability. In the realm of information systems (IS) management, the establishment of a trusting environment alleviates the weight of responsibility for everyone engaged. For the staff, it reduces the burden of constantly facing challenges to their integrity, allowing them to concentrate their energies more fully on the task at hand and their objective.
Funding: This research did not receive any specific funding.
Conflicts of interest: The authors declare no conflict of interest that could have appeared to influence the work reported in this paper.
Aditto, F. S., Sobuz, M. H. R., Saha, A., Jabin, J. A., Kabbo, M. K. I., Hasan, N. M. S., & Islam, S. (2023). Fresh, mechanical and microstructural behaviour of highstrength self-compacting concrete using supplementary cementitious materials. Case Studies in Construction Materials, 19, e02395.
Allahrakha, N. (2023). Balancing cyber-security and privacy: Legal and ethical considerations in the digital age. Legal Issues in the Digital Age, (2), 78-121.
Desai, M. S., Von der Embse, T. J., & Ofori-Brobbey, K. (2008). Information technology and electronic information: an ethical dilemma. SAM Advanced Management Journal, 73(3), 16.
Ellßel, C., & Flemming, D. (2022). Data security, cybersecurity, legal and ethical implications for digital health: A European perspective. In Nursing and Informatics for the 21st Century - Embracing a Digital World, 3rd Edition, Book 4 (pp. 129-148). Productivity Press.
Hasan, N. M. S., Sobuz, M. H. R., Shaurdho, N. M. N., Meraz, M. M., Datta, S. D., Aditto, F. S., Kabbo, M. K. I., & Miah, M. J. (2023). Eco-friendly concrete incorporating palm oil fuel ash: Fresh and mechanical properties with machine learning prediction, and sustainability assessment. Heliyon, 9(11).
Kabbo, M., Sobuz, M., & Khan, M. (2023). Combined influence of Waste Marble Powder and Silica Fume on the Mechanical Properties of Structural Cellular Lightweight Concrete. International Conference on Planning, Architecture & Civil Engineering.
Kushwaha, P. K., Bibhu, V., Lohani, B. P., & Singh, D. (2016). Review on information security, laws and ethical issues with online financial system. 2016 International Conference on Innovation and Challenges in Cyber Security (ICICCS-INBUSH).
Ngoepe, M., Mokoena, L., & Ngulube, P. (2010). Security, privacy and ethics in electronic records management in the South African public sector. Esarbica Journal, 29.
North, M. M., George, R., & North, S. M. (2006). Computer security and ethics awareness in university environments: A challenge for management of information systems. Proceedings of the 44th Annual Southeast Regional Conference.
Rahman, M. A., & Jim, M. M. I. (2024). Addressing Privacy And Ethical Considerations In Health Information Management Systems (IMS). International Journal of Health and Medical, 1(2), 1-13.
Sarathy, R., & Robertson, C. J. (2003). Strategic and ethical considerations in managing digital privacy. Journal of Business ethics, 46, 111-126.
Shou, D. (2012). Ethical considerations of sharing data for cybersecurity research. Financial Cryptography and Data Security: FC 2011 Workshops, RLCPS and WECSR 2011, Rodney Bay, St. Lucia, February 28-March 4, 2011, Revised Selected Papers 15.
Sobuz, M. H. R., Joy, L. P., Akid, A. S. M., Aditto, F. S., Jabin, J. A., Hasan, N. M. S., Meraz, M. M., Kabbo, M. K. I., & Datta, S. D. (2024). Optimization of recycled rubber self-compacting concrete: Experimental findings and machine learning-based evaluation. Heliyon, 10(6).
Sobuz, M. H. R., Khan, M. H., Kabbo, M. K. I., Alhamami, A. H., Aditto, F. S., Sajib, M. S., Alengaram, U. J., Mansour, W., Hasan, N. M. S., & Datta, S. D. (2024). Assessment of mechanical properties with machine learning modeling and durability, and microstructural characteristics of a biochar-cement mortar composite. Construction and Building Materials, 411, 134281.
Stahl, B. C., Doherty, N. F., Shaw, M., & Janicke, H. (2014). Critical theory as an approach to the ethics of information security. Science and Engineering Ethics, 20, 675-699.
Tino, C. F., Becker, A. C., Pereira, B., da Rosa Corrêa, L., Soares, M. L., & Nascimento, D. (2024). Ethical, legal, and information management aspects in the context of patient safety. Revista de Gestão e Secretariado, 15(1), 167-179.