Evaluating the Ethical Practices in Developing AI and ML Systems in Tanzania


Lazaro Inon Kumbo
Victor Simon Nkwera
Rodrick Frank Mero

Abstract

Artificial Intelligence (AI) and Machine Learning (ML) present transformative opportunities for sectors in developing countries like Tanzania that were previously hindered by manual processes and data inefficiencies. Despite these advances, ethical challenges relating to bias, fairness, transparency, privacy, and accountability remain critical during the design and deployment of AI and ML systems. This study explores these ethical dimensions from the perspective of Tanzanian IT professionals, given the country's nascent AI landscape. The research aims to understand and address these challenges using a mixed-method approach comprising case studies, a systematic literature review, and critical analysis. Findings reveal significant concerns about algorithmic bias, the complexity of ensuring fairness and equity, gaps in transparency and explainability (both crucial for fostering user trust and understanding), and heightened privacy and security risks. The study underscores the importance of integrating ethical considerations throughout the development lifecycle of AI and ML systems and the necessity of robust regulatory frameworks. Recommendations include developing targeted regulatory guidelines, providing comprehensive training for IT professionals, and fostering public trust through transparency and accountability. Ethical AI and ML practices are thus essential to ensuring responsible and equitable technological development in Tanzania.

Article Details

How to Cite
[1] L. I. Kumbo, V. S. Nkwera, and R. F. Mero, “Evaluating the Ethical Practices in Developing AI and ML Systems in Tanzania”, AJERD, vol. 7, no. 2, pp. 340–351, Sep. 2024.
Section
Articles
